00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3663 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3265 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.059 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.060 The recommended git tool is: git 00:00:00.060 using credential 00000000-0000-0000-0000-000000000002 00:00:00.070 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.086 Fetching changes from the remote Git repository 00:00:00.088 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.103 Using shallow fetch with depth 1 00:00:00.103 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.103 > git --version # timeout=10 00:00:00.119 > git --version # 'git version 2.39.2' 00:00:00.119 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.140 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.140 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.236 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.249 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.259 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:04.259 > git config core.sparsecheckout # timeout=10 00:00:04.269 > git read-tree -mu HEAD # timeout=10 00:00:04.284 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:04.300 Commit message: "inventory: add WCP3 to free inventory" 00:00:04.300 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:04.379 [Pipeline] Start of Pipeline 00:00:04.394 [Pipeline] library 00:00:04.396 Loading library shm_lib@master 00:00:04.396 Library shm_lib@master is cached. Copying from home. 00:00:04.413 [Pipeline] node 00:00:04.426 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.427 [Pipeline] { 00:00:04.436 [Pipeline] catchError 00:00:04.438 [Pipeline] { 00:00:04.448 [Pipeline] wrap 00:00:04.457 [Pipeline] { 00:00:04.463 [Pipeline] stage 00:00:04.465 [Pipeline] { (Prologue) 00:00:04.628 [Pipeline] sh 00:00:04.900 + logger -p user.info -t JENKINS-CI 00:00:04.915 [Pipeline] echo 00:00:04.916 Node: GP11 00:00:04.922 [Pipeline] sh 00:00:05.213 [Pipeline] setCustomBuildProperty 00:00:05.225 [Pipeline] echo 00:00:05.226 Cleanup processes 00:00:05.231 [Pipeline] sh 00:00:05.507 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.507 867614 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.518 [Pipeline] sh 00:00:05.790 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.790 ++ awk '{print $1}' 00:00:05.790 ++ grep -v 'sudo pgrep' 00:00:05.790 + sudo kill -9 00:00:05.790 + true 00:00:05.804 [Pipeline] cleanWs 00:00:05.812 [WS-CLEANUP] Deleting project workspace... 00:00:05.812 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.817 [WS-CLEANUP] done 00:00:05.821 [Pipeline] setCustomBuildProperty 00:00:05.835 [Pipeline] sh 00:00:06.107 + sudo git config --global --replace-all safe.directory '*' 00:00:06.165 [Pipeline] httpRequest 00:00:06.195 [Pipeline] echo 00:00:06.197 Sorcerer 10.211.164.101 is alive 00:00:06.204 [Pipeline] httpRequest 00:00:06.208 HttpMethod: GET 00:00:06.208 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.208 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.210 Response Code: HTTP/1.1 200 OK 00:00:06.211 Success: Status code 200 is in the accepted range: 200,404 00:00:06.211 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:07.216 [Pipeline] sh 00:00:07.499 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:07.514 [Pipeline] httpRequest 00:00:07.543 [Pipeline] echo 00:00:07.545 Sorcerer 10.211.164.101 is alive 00:00:07.553 [Pipeline] httpRequest 00:00:07.557 HttpMethod: GET 00:00:07.558 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:07.558 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:07.574 Response Code: HTTP/1.1 200 OK 00:00:07.575 Success: Status code 200 is in the accepted range: 200,404 00:00:07.575 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:05.174 [Pipeline] sh 00:01:05.458 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:07.999 [Pipeline] sh 00:01:08.280 + git -C spdk log --oneline -n5 00:01:08.280 719d03c6a sock/uring: only register net impl if supported 00:01:08.280 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:08.280 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:08.280 6c7c1f57e accel: add sequence outstanding stat 00:01:08.280 3bc8e6a26 accel: add utility to put task 00:01:08.297 [Pipeline] withCredentials 00:01:08.308 > git --version # timeout=10 00:01:08.319 > git --version # 'git version 2.39.2' 00:01:08.336 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:08.338 [Pipeline] { 00:01:08.346 [Pipeline] retry 00:01:08.348 [Pipeline] { 00:01:08.365 [Pipeline] sh 00:01:08.646 + git ls-remote http://dpdk.org/git/dpdk main 00:01:11.954 [Pipeline] } 00:01:11.974 [Pipeline] // retry 00:01:11.979 [Pipeline] } 00:01:11.999 [Pipeline] // withCredentials 00:01:12.009 [Pipeline] httpRequest 00:01:12.028 [Pipeline] echo 00:01:12.030 Sorcerer 10.211.164.101 is alive 00:01:12.038 [Pipeline] httpRequest 00:01:12.043 HttpMethod: GET 00:01:12.043 URL: http://10.211.164.101/packages/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:01:12.044 Sending request to url: http://10.211.164.101/packages/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:01:12.045 Response Code: HTTP/1.1 200 OK 00:01:12.045 Success: Status code 200 is in the accepted range: 200,404 00:01:12.046 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:01:19.127 [Pipeline] sh 00:01:19.410 + tar --no-same-owner -xf dpdk_fa8d2f7f28524a6c8defa3dcd94f5aa131aae084.tar.gz 00:01:20.800 [Pipeline] sh 00:01:21.104 + git -C dpdk log --oneline -n5 00:01:21.104 fa8d2f7f28 version: 24.07-rc2 00:01:21.104 d4bc3c2e01 maintainers: 
update for cxgbe driver 00:01:21.104 2227c0ed9a maintainers: update for Microsoft drivers 00:01:21.104 8385370337 maintainers: update for Arm 00:01:21.104 62edcfd6ea net/nfp: support parsing packet type in vector Rx 00:01:21.122 [Pipeline] } 00:01:21.139 [Pipeline] // stage 00:01:21.148 [Pipeline] stage 00:01:21.150 [Pipeline] { (Prepare) 00:01:21.171 [Pipeline] writeFile 00:01:21.189 [Pipeline] sh 00:01:21.469 + logger -p user.info -t JENKINS-CI 00:01:21.484 [Pipeline] sh 00:01:21.767 + logger -p user.info -t JENKINS-CI 00:01:21.781 [Pipeline] sh 00:01:22.058 + cat autorun-spdk.conf 00:01:22.058 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.058 SPDK_TEST_NVMF=1 00:01:22.058 SPDK_TEST_NVME_CLI=1 00:01:22.058 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.058 SPDK_TEST_NVMF_NICS=e810 00:01:22.058 SPDK_TEST_VFIOUSER=1 00:01:22.058 SPDK_RUN_UBSAN=1 00:01:22.058 NET_TYPE=phy 00:01:22.058 SPDK_TEST_NATIVE_DPDK=main 00:01:22.058 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:22.065 RUN_NIGHTLY=1 00:01:22.070 [Pipeline] readFile 00:01:22.097 [Pipeline] withEnv 00:01:22.099 [Pipeline] { 00:01:22.112 [Pipeline] sh 00:01:22.391 + set -ex 00:01:22.391 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:22.391 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:22.391 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.391 ++ SPDK_TEST_NVMF=1 00:01:22.391 ++ SPDK_TEST_NVME_CLI=1 00:01:22.391 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.391 ++ SPDK_TEST_NVMF_NICS=e810 00:01:22.391 ++ SPDK_TEST_VFIOUSER=1 00:01:22.391 ++ SPDK_RUN_UBSAN=1 00:01:22.391 ++ NET_TYPE=phy 00:01:22.391 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:22.391 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:22.391 ++ RUN_NIGHTLY=1 00:01:22.391 + case $SPDK_TEST_NVMF_NICS in 00:01:22.391 + DRIVERS=ice 00:01:22.391 + [[ tcp == \r\d\m\a ]] 00:01:22.391 + [[ -n ice ]] 00:01:22.391 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:22.391 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:22.391 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:22.391 rmmod: ERROR: Module irdma is not currently loaded 00:01:22.391 rmmod: ERROR: Module i40iw is not currently loaded 00:01:22.391 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:22.391 + true 00:01:22.391 + for D in $DRIVERS 00:01:22.391 + sudo modprobe ice 00:01:22.391 + exit 0 00:01:22.399 [Pipeline] } 00:01:22.412 [Pipeline] // withEnv 00:01:22.417 [Pipeline] } 00:01:22.430 [Pipeline] // stage 00:01:22.440 [Pipeline] catchError 00:01:22.441 [Pipeline] { 00:01:22.456 [Pipeline] timeout 00:01:22.456 Timeout set to expire in 50 min 00:01:22.458 [Pipeline] { 00:01:22.471 [Pipeline] stage 00:01:22.473 [Pipeline] { (Tests) 00:01:22.487 [Pipeline] sh 00:01:22.765 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:22.765 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:22.765 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:22.765 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:22.765 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:22.765 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:22.765 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:22.765 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:22.765 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:22.765 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:22.765 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:22.765 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:22.765 + source /etc/os-release 00:01:22.765 ++ NAME='Fedora Linux' 00:01:22.765 ++ VERSION='38 (Cloud Edition)' 00:01:22.765 ++ ID=fedora 00:01:22.765 ++ VERSION_ID=38 00:01:22.765 ++ VERSION_CODENAME= 00:01:22.765 ++ PLATFORM_ID=platform:f38 00:01:22.765 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:22.765 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:22.765 ++ LOGO=fedora-logo-icon 00:01:22.765 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:22.765 ++ HOME_URL=https://fedoraproject.org/ 00:01:22.765 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:22.765 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:22.765 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:22.765 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:22.765 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:22.765 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:22.765 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:22.765 ++ SUPPORT_END=2024-05-14 00:01:22.765 ++ VARIANT='Cloud Edition' 00:01:22.765 ++ VARIANT_ID=cloud 00:01:22.765 + uname -a 00:01:22.765 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:22.765 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:23.701 Hugepages 00:01:23.701 node hugesize free / total 00:01:23.701 node0 1048576kB 0 / 0 00:01:23.701 node0 2048kB 0 / 0 00:01:23.701 node1 1048576kB 0 / 0 00:01:23.701 node1 2048kB 0 / 0 00:01:23.701 00:01:23.701 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:23.701 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:23.701 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:23.701 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:23.701 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:23.701 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:23.701 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:23.701 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:23.701 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:23.701 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:23.701 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:23.701 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:23.701 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:23.701 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:23.701 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:23.701 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:23.701 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:23.701 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:23.701 + rm -f /tmp/spdk-ld-path 00:01:23.701 + source autorun-spdk.conf 00:01:23.701 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.701 ++ SPDK_TEST_NVMF=1 00:01:23.701 ++ SPDK_TEST_NVME_CLI=1 00:01:23.701 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:23.701 ++ SPDK_TEST_NVMF_NICS=e810 00:01:23.701 ++ SPDK_TEST_VFIOUSER=1 00:01:23.701 ++ SPDK_RUN_UBSAN=1 00:01:23.701 ++ NET_TYPE=phy 00:01:23.701 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:23.701 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:23.701 ++ RUN_NIGHTLY=1 00:01:23.701 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:23.701 + [[ -n '' ]] 00:01:23.701 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:23.958 + for M in /var/spdk/build-*-manifest.txt 00:01:23.958 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:23.958 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:23.958 + for M in /var/spdk/build-*-manifest.txt 00:01:23.958 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:23.958 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:23.958 ++ uname 00:01:23.958 + [[ Linux == \L\i\n\u\x ]] 00:01:23.958 + sudo dmesg -T 00:01:23.958 + sudo dmesg --clear 00:01:23.958 + dmesg_pid=868946 00:01:23.958 + [[ Fedora Linux == FreeBSD ]] 00:01:23.958 + sudo dmesg -Tw 00:01:23.958 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:23.958 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:23.958 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:23.958 + [[ -x /usr/src/fio-static/fio ]] 00:01:23.959 + export FIO_BIN=/usr/src/fio-static/fio 00:01:23.959 + FIO_BIN=/usr/src/fio-static/fio 00:01:23.959 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:23.959 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:23.959 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:23.959 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.959 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.959 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:23.959 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.959 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.959 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:23.959 Test configuration: 00:01:23.959 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.959 SPDK_TEST_NVMF=1 00:01:23.959 SPDK_TEST_NVME_CLI=1 00:01:23.959 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:23.959 SPDK_TEST_NVMF_NICS=e810 00:01:23.959 SPDK_TEST_VFIOUSER=1 00:01:23.959 SPDK_RUN_UBSAN=1 00:01:23.959 NET_TYPE=phy 00:01:23.959 SPDK_TEST_NATIVE_DPDK=main 00:01:23.959 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:23.959 RUN_NIGHTLY=1 15:12:54 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:23.959 15:12:54 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:23.959 15:12:54 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:23.959 15:12:54 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:23.959 15:12:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.959 15:12:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.959 15:12:54 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.959 15:12:54 -- paths/export.sh@5 -- $ export PATH 00:01:23.959 15:12:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.959 15:12:54 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:23.959 15:12:54 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:23.959 15:12:54 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720876374.XXXXXX 00:01:23.959 15:12:54 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720876374.UtW7Dq 00:01:23.959 15:12:54 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:23.959 15:12:54 -- common/autobuild_common.sh@450 -- $ '[' -n main ']' 00:01:23.959 15:12:54 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:23.959 15:12:54 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:23.959 15:12:54 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:23.959 15:12:54 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:23.959 15:12:54 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:23.959 15:12:54 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:23.959 15:12:54 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.959 15:12:54 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:23.959 15:12:54 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:23.959 15:12:54 -- pm/common@17 -- $ local monitor 00:01:23.959 15:12:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.959 15:12:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.959 15:12:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.959 15:12:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.959 15:12:54 -- pm/common@21 -- $ date +%s 00:01:23.959 15:12:54 -- pm/common@21 -- $ date +%s 00:01:23.959 15:12:54 -- pm/common@25 -- $ sleep 1 00:01:23.959 15:12:54 -- pm/common@21 -- $ date +%s 00:01:23.959 15:12:54 -- pm/common@21 -- $ date +%s 00:01:23.959 15:12:54 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720876374 00:01:23.959 15:12:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720876374 00:01:23.959 15:12:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720876374 00:01:23.959 15:12:54 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720876374 00:01:23.959 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720876374_collect-vmstat.pm.log 00:01:23.959 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720876374_collect-cpu-load.pm.log 00:01:23.959 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720876374_collect-cpu-temp.pm.log 00:01:23.959 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720876374_collect-bmc-pm.bmc.pm.log 00:01:24.891 15:12:55 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:24.891 15:12:55 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:24.891 15:12:55 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:24.891 15:12:55 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.891 15:12:55 -- spdk/autobuild.sh@16 -- $ date -u 00:01:24.891 Sat Jul 13 01:12:55 PM UTC 2024 00:01:24.891 15:12:55 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:24.891 v24.09-pre-202-g719d03c6a 00:01:24.891 15:12:55 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:24.891 15:12:55 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:24.891 15:12:55 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:24.891 15:12:55 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:24.891 15:12:55 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:24.891 15:12:55 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.891 ************************************ 00:01:24.891 START TEST ubsan 00:01:24.891 ************************************ 00:01:24.891 15:12:55 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:24.891 using ubsan 00:01:24.891 00:01:24.891 real 0m0.000s 00:01:24.891 user 0m0.000s 00:01:24.891 sys 0m0.000s 00:01:24.891 15:12:55 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:24.891 15:12:55 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:24.891 ************************************ 00:01:24.891 END TEST ubsan 00:01:24.891 ************************************ 00:01:24.891 15:12:55 -- common/autotest_common.sh@1142 -- $ return 0 00:01:24.891 15:12:55 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:01:24.891 15:12:55 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:24.891 15:12:55 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:24.891 15:12:55 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:24.891 15:12:55 -- common/autotest_common.sh@1105 -- $ 
xtrace_disable 00:01:24.891 15:12:55 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.149 ************************************ 00:01:25.149 START TEST build_native_dpdk 00:01:25.149 ************************************ 00:01:25.149 15:12:55 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:25.149 fa8d2f7f28 version: 24.07-rc2 00:01:25.149 d4bc3c2e01 maintainers: update for cxgbe driver 00:01:25.149 2227c0ed9a maintainers: update for Microsoft drivers 00:01:25.149 8385370337 maintainers: update for Arm 00:01:25.149 62edcfd6ea net/nfp: support parsing packet type in vector Rx 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc2 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:25.149 15:12:55 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:25.150 15:12:55 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:25.150 15:12:55 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc2 21.11.0 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc2 '<' 21.11.0 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:25.150 
15:12:55 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:25.150 15:12:55 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:25.150 15:12:55 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:25.150 patching file config/rte_config.h 00:01:25.150 Hunk #1 succeeded at 70 (offset 11 lines). 00:01:25.150 15:12:55 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:25.150 15:12:55 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:25.150 15:12:55 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:25.150 15:12:55 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:25.150 15:12:55 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:29.333 The Meson build system 00:01:29.333 Version: 1.3.1 00:01:29.333 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:29.333 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:29.333 Build type: native build 00:01:29.333 Program cat found: YES (/usr/bin/cat) 00:01:29.333 Project name: DPDK 00:01:29.333 Project version: 24.07.0-rc2 00:01:29.333 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:29.333 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:29.333 Host machine cpu family: x86_64 00:01:29.333 Host machine cpu: x86_64 00:01:29.333 Message: ## Building in Developer Mode ## 00:01:29.333 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:29.333 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:29.333 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:29.333 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:01:29.333 Program cat found: YES (/usr/bin/cat) 00:01:29.333 config/meson.build:120: WARNING: The "machine" option is deprecated. 
Please use "cpu_instruction_set" instead. 00:01:29.333 Compiler for C supports arguments -march=native: YES 00:01:29.333 Checking for size of "void *" : 8 00:01:29.333 Checking for size of "void *" : 8 (cached) 00:01:29.333 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:29.333 Library m found: YES 00:01:29.333 Library numa found: YES 00:01:29.333 Has header "numaif.h" : YES 00:01:29.333 Library fdt found: NO 00:01:29.333 Library execinfo found: NO 00:01:29.333 Has header "execinfo.h" : YES 00:01:29.333 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:29.333 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:29.333 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:29.333 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:29.333 Run-time dependency openssl found: YES 3.0.9 00:01:29.333 Run-time dependency libpcap found: YES 1.10.4 00:01:29.333 Has header "pcap.h" with dependency libpcap: YES 00:01:29.333 Compiler for C supports arguments -Wcast-qual: YES 00:01:29.333 Compiler for C supports arguments -Wdeprecated: YES 00:01:29.333 Compiler for C supports arguments -Wformat: YES 00:01:29.333 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:29.333 Compiler for C supports arguments -Wformat-security: NO 00:01:29.333 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:29.333 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:29.333 Compiler for C supports arguments -Wnested-externs: YES 00:01:29.333 Compiler for C supports arguments -Wold-style-definition: YES 00:01:29.333 Compiler for C supports arguments -Wpointer-arith: YES 00:01:29.333 Compiler for C supports arguments -Wsign-compare: YES 00:01:29.333 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:29.333 Compiler for C supports arguments -Wundef: YES 00:01:29.333 Compiler for C supports arguments -Wwrite-strings: YES 00:01:29.333 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:29.333 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:29.333 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:29.333 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:29.333 Program objdump found: YES (/usr/bin/objdump) 00:01:29.333 Compiler for C supports arguments -mavx512f: YES 00:01:29.333 Checking if "AVX512 checking" compiles: YES 00:01:29.333 Fetching value of define "__SSE4_2__" : 1 00:01:29.333 Fetching value of define "__AES__" : 1 00:01:29.333 Fetching value of define "__AVX__" : 1 00:01:29.333 Fetching value of define "__AVX2__" : (undefined) 00:01:29.333 Fetching value of define "__AVX512BW__" : (undefined) 00:01:29.333 Fetching value of define "__AVX512CD__" : (undefined) 00:01:29.333 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:29.333 Fetching value of define "__AVX512F__" : (undefined) 00:01:29.333 Fetching value of define "__AVX512VL__" : (undefined) 00:01:29.333 Fetching value of define "__PCLMUL__" : 1 00:01:29.333 Fetching value of define "__RDRND__" : 1 00:01:29.333 Fetching value of define "__RDSEED__" : (undefined) 00:01:29.333 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:29.333 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:29.333 Message: lib/log: Defining dependency "log" 00:01:29.333 Message: lib/kvargs: Defining dependency "kvargs" 00:01:29.333 Message: lib/argparse: Defining dependency "argparse" 00:01:29.333 Message: lib/telemetry: Defining dependency "telemetry" 
00:01:29.333 Checking for function "getentropy" : NO 00:01:29.333 Message: lib/eal: Defining dependency "eal" 00:01:29.333 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:01:29.333 Message: lib/ring: Defining dependency "ring" 00:01:29.333 Message: lib/rcu: Defining dependency "rcu" 00:01:29.334 Message: lib/mempool: Defining dependency "mempool" 00:01:29.334 Message: lib/mbuf: Defining dependency "mbuf" 00:01:29.334 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:29.334 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:29.334 Compiler for C supports arguments -mpclmul: YES 00:01:29.334 Compiler for C supports arguments -maes: YES 00:01:29.334 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:29.334 Compiler for C supports arguments -mavx512bw: YES 00:01:29.334 Compiler for C supports arguments -mavx512dq: YES 00:01:29.334 Compiler for C supports arguments -mavx512vl: YES 00:01:29.334 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:29.334 Compiler for C supports arguments -mavx2: YES 00:01:29.334 Compiler for C supports arguments -mavx: YES 00:01:29.334 Message: lib/net: Defining dependency "net" 00:01:29.334 Message: lib/meter: Defining dependency "meter" 00:01:29.334 Message: lib/ethdev: Defining dependency "ethdev" 00:01:29.334 Message: lib/pci: Defining dependency "pci" 00:01:29.334 Message: lib/cmdline: Defining dependency "cmdline" 00:01:29.334 Message: lib/metrics: Defining dependency "metrics" 00:01:29.334 Message: lib/hash: Defining dependency "hash" 00:01:29.334 Message: lib/timer: Defining dependency "timer" 00:01:29.334 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:29.334 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:29.334 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:29.334 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:29.334 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:29.334 Message: lib/acl: Defining dependency "acl" 00:01:29.334 Message: lib/bbdev: Defining dependency "bbdev" 00:01:29.334 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:29.334 Run-time dependency libelf found: YES 0.190 00:01:29.334 Message: lib/bpf: Defining dependency "bpf" 00:01:29.334 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:29.334 Message: lib/compressdev: Defining dependency "compressdev" 00:01:29.334 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:29.334 Message: lib/distributor: Defining dependency "distributor" 00:01:29.334 Message: lib/dmadev: Defining dependency "dmadev" 00:01:29.334 Message: lib/efd: Defining dependency "efd" 00:01:29.334 Message: lib/eventdev: Defining dependency "eventdev" 00:01:29.334 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:29.334 Message: lib/gpudev: Defining dependency "gpudev" 00:01:29.334 Message: lib/gro: Defining dependency "gro" 00:01:29.334 Message: lib/gso: Defining dependency "gso" 00:01:29.334 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:29.334 Message: lib/jobstats: Defining dependency "jobstats" 00:01:29.334 Message: lib/latencystats: Defining dependency "latencystats" 00:01:29.334 Message: lib/lpm: Defining dependency "lpm" 00:01:29.334 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:29.334 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:29.334 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:29.334 Compiler for C supports 
arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:29.334 Message: lib/member: Defining dependency "member" 00:01:29.334 Message: lib/pcapng: Defining dependency "pcapng" 00:01:29.334 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:29.334 Message: lib/power: Defining dependency "power" 00:01:29.334 Message: lib/rawdev: Defining dependency "rawdev" 00:01:29.334 Message: lib/regexdev: Defining dependency "regexdev" 00:01:29.334 Message: lib/mldev: Defining dependency "mldev" 00:01:29.334 Message: lib/rib: Defining dependency "rib" 00:01:29.334 Message: lib/reorder: Defining dependency "reorder" 00:01:29.334 Message: lib/sched: Defining dependency "sched" 00:01:29.334 Message: lib/security: Defining dependency "security" 00:01:29.334 Message: lib/stack: Defining dependency "stack" 00:01:29.334 Has header "linux/userfaultfd.h" : YES 00:01:29.334 Has header "linux/vduse.h" : YES 00:01:29.334 Message: lib/vhost: Defining dependency "vhost" 00:01:29.334 Message: lib/ipsec: Defining dependency "ipsec" 00:01:29.334 Message: lib/pdcp: Defining dependency "pdcp" 00:01:29.334 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:29.334 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:29.334 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:29.334 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:29.334 Message: lib/fib: Defining dependency "fib" 00:01:29.334 Message: lib/port: Defining dependency "port" 00:01:29.334 Message: lib/pdump: Defining dependency "pdump" 00:01:29.334 Message: lib/table: Defining dependency "table" 00:01:29.334 Message: lib/pipeline: Defining dependency "pipeline" 00:01:29.334 Message: lib/graph: Defining dependency "graph" 00:01:29.334 Message: lib/node: Defining dependency "node" 00:01:30.712 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:30.712 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:30.712 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:30.712 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:30.712 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:30.712 Compiler for C supports arguments -Wno-unused-value: YES 00:01:30.712 Compiler for C supports arguments -Wno-format: YES 00:01:30.712 Compiler for C supports arguments -Wno-format-security: YES 00:01:30.712 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:30.712 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:30.712 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:30.712 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:30.712 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:30.712 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:30.712 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:30.712 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:30.712 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:30.712 Has header "sys/epoll.h" : YES 00:01:30.712 Program doxygen found: YES (/usr/bin/doxygen) 00:01:30.712 Configuring doxy-api-html.conf using configuration 00:01:30.712 Configuring doxy-api-man.conf using configuration 00:01:30.712 Program mandb found: YES (/usr/bin/mandb) 00:01:30.712 Program sphinx-build found: NO 00:01:30.712 Configuring rte_build_config.h using configuration 00:01:30.712 Message: 00:01:30.712 ================= 00:01:30.712 Applications Enabled 00:01:30.712 
================= 00:01:30.712 00:01:30.712 apps: 00:01:30.712 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:30.712 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:30.712 test-pmd, test-regex, test-sad, test-security-perf, 00:01:30.712 00:01:30.712 Message: 00:01:30.712 ================= 00:01:30.712 Libraries Enabled 00:01:30.712 ================= 00:01:30.712 00:01:30.712 libs: 00:01:30.712 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:01:30.712 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:01:30.712 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:01:30.712 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:01:30.712 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:01:30.712 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:01:30.712 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:01:30.712 graph, node, 00:01:30.712 00:01:30.712 Message: 00:01:30.712 =============== 00:01:30.712 Drivers Enabled 00:01:30.712 =============== 00:01:30.712 00:01:30.712 common: 00:01:30.712 00:01:30.712 bus: 00:01:30.712 pci, vdev, 00:01:30.712 mempool: 00:01:30.712 ring, 00:01:30.712 dma: 00:01:30.712 00:01:30.712 net: 00:01:30.712 i40e, 00:01:30.712 raw: 00:01:30.712 00:01:30.712 crypto: 00:01:30.712 00:01:30.712 compress: 00:01:30.712 00:01:30.712 regex: 00:01:30.712 00:01:30.712 ml: 00:01:30.712 00:01:30.712 vdpa: 00:01:30.712 00:01:30.712 event: 00:01:30.712 00:01:30.712 baseband: 00:01:30.712 00:01:30.712 gpu: 00:01:30.712 00:01:30.712 00:01:30.712 Message: 00:01:30.712 ================= 00:01:30.712 Content Skipped 00:01:30.712 ================= 00:01:30.712 00:01:30.712 apps: 00:01:30.712 00:01:30.712 libs: 00:01:30.712 00:01:30.712 drivers: 00:01:30.712 common/cpt: not in enabled drivers build config 00:01:30.712 common/dpaax: not in enabled drivers build config 00:01:30.712 common/iavf: not in enabled drivers build config 00:01:30.712 common/idpf: not in enabled drivers build config 00:01:30.712 common/ionic: not in enabled drivers build config 00:01:30.712 common/mvep: not in enabled drivers build config 00:01:30.712 common/octeontx: not in enabled drivers build config 00:01:30.712 bus/auxiliary: not in enabled drivers build config 00:01:30.712 bus/cdx: not in enabled drivers build config 00:01:30.712 bus/dpaa: not in enabled drivers build config 00:01:30.712 bus/fslmc: not in enabled drivers build config 00:01:30.712 bus/ifpga: not in enabled drivers build config 00:01:30.712 bus/platform: not in enabled drivers build config 00:01:30.712 bus/uacce: not in enabled drivers build config 00:01:30.712 bus/vmbus: not in enabled drivers build config 00:01:30.712 common/cnxk: not in enabled drivers build config 00:01:30.712 common/mlx5: not in enabled drivers build config 00:01:30.712 common/nfp: not in enabled drivers build config 00:01:30.712 common/nitrox: not in enabled drivers build config 00:01:30.712 common/qat: not in enabled drivers build config 00:01:30.712 common/sfc_efx: not in enabled drivers build config 00:01:30.712 mempool/bucket: not in enabled drivers build config 00:01:30.712 mempool/cnxk: not in enabled drivers build config 00:01:30.712 mempool/dpaa: not in enabled drivers build config 00:01:30.712 mempool/dpaa2: not in enabled drivers build config 00:01:30.712 mempool/octeontx: not in enabled drivers build config 00:01:30.712 mempool/stack: 
not in enabled drivers build config 00:01:30.712 dma/cnxk: not in enabled drivers build config 00:01:30.712 dma/dpaa: not in enabled drivers build config 00:01:30.712 dma/dpaa2: not in enabled drivers build config 00:01:30.712 dma/hisilicon: not in enabled drivers build config 00:01:30.712 dma/idxd: not in enabled drivers build config 00:01:30.712 dma/ioat: not in enabled drivers build config 00:01:30.712 dma/odm: not in enabled drivers build config 00:01:30.712 dma/skeleton: not in enabled drivers build config 00:01:30.712 net/af_packet: not in enabled drivers build config 00:01:30.712 net/af_xdp: not in enabled drivers build config 00:01:30.712 net/ark: not in enabled drivers build config 00:01:30.712 net/atlantic: not in enabled drivers build config 00:01:30.712 net/avp: not in enabled drivers build config 00:01:30.712 net/axgbe: not in enabled drivers build config 00:01:30.712 net/bnx2x: not in enabled drivers build config 00:01:30.712 net/bnxt: not in enabled drivers build config 00:01:30.712 net/bonding: not in enabled drivers build config 00:01:30.712 net/cnxk: not in enabled drivers build config 00:01:30.712 net/cpfl: not in enabled drivers build config 00:01:30.712 net/cxgbe: not in enabled drivers build config 00:01:30.712 net/dpaa: not in enabled drivers build config 00:01:30.712 net/dpaa2: not in enabled drivers build config 00:01:30.712 net/e1000: not in enabled drivers build config 00:01:30.712 net/ena: not in enabled drivers build config 00:01:30.712 net/enetc: not in enabled drivers build config 00:01:30.712 net/enetfec: not in enabled drivers build config 00:01:30.712 net/enic: not in enabled drivers build config 00:01:30.712 net/failsafe: not in enabled drivers build config 00:01:30.712 net/fm10k: not in enabled drivers build config 00:01:30.712 net/gve: not in enabled drivers build config 00:01:30.712 net/hinic: not in enabled drivers build config 00:01:30.712 net/hns3: not in enabled drivers build config 00:01:30.712 net/iavf: not in enabled drivers build config 00:01:30.712 net/ice: not in enabled drivers build config 00:01:30.712 net/idpf: not in enabled drivers build config 00:01:30.712 net/igc: not in enabled drivers build config 00:01:30.712 net/ionic: not in enabled drivers build config 00:01:30.712 net/ipn3ke: not in enabled drivers build config 00:01:30.712 net/ixgbe: not in enabled drivers build config 00:01:30.712 net/mana: not in enabled drivers build config 00:01:30.712 net/memif: not in enabled drivers build config 00:01:30.712 net/mlx4: not in enabled drivers build config 00:01:30.712 net/mlx5: not in enabled drivers build config 00:01:30.712 net/mvneta: not in enabled drivers build config 00:01:30.712 net/mvpp2: not in enabled drivers build config 00:01:30.712 net/netvsc: not in enabled drivers build config 00:01:30.712 net/nfb: not in enabled drivers build config 00:01:30.712 net/nfp: not in enabled drivers build config 00:01:30.712 net/ngbe: not in enabled drivers build config 00:01:30.712 net/null: not in enabled drivers build config 00:01:30.712 net/octeontx: not in enabled drivers build config 00:01:30.712 net/octeon_ep: not in enabled drivers build config 00:01:30.712 net/pcap: not in enabled drivers build config 00:01:30.712 net/pfe: not in enabled drivers build config 00:01:30.712 net/qede: not in enabled drivers build config 00:01:30.712 net/ring: not in enabled drivers build config 00:01:30.712 net/sfc: not in enabled drivers build config 00:01:30.712 net/softnic: not in enabled drivers build config 00:01:30.712 net/tap: not in enabled drivers 
build config 00:01:30.712 net/thunderx: not in enabled drivers build config 00:01:30.712 net/txgbe: not in enabled drivers build config 00:01:30.712 net/vdev_netvsc: not in enabled drivers build config 00:01:30.712 net/vhost: not in enabled drivers build config 00:01:30.712 net/virtio: not in enabled drivers build config 00:01:30.712 net/vmxnet3: not in enabled drivers build config 00:01:30.712 raw/cnxk_bphy: not in enabled drivers build config 00:01:30.712 raw/cnxk_gpio: not in enabled drivers build config 00:01:30.712 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:30.712 raw/ifpga: not in enabled drivers build config 00:01:30.712 raw/ntb: not in enabled drivers build config 00:01:30.712 raw/skeleton: not in enabled drivers build config 00:01:30.713 crypto/armv8: not in enabled drivers build config 00:01:30.713 crypto/bcmfs: not in enabled drivers build config 00:01:30.713 crypto/caam_jr: not in enabled drivers build config 00:01:30.713 crypto/ccp: not in enabled drivers build config 00:01:30.713 crypto/cnxk: not in enabled drivers build config 00:01:30.713 crypto/dpaa_sec: not in enabled drivers build config 00:01:30.713 crypto/dpaa2_sec: not in enabled drivers build config 00:01:30.713 crypto/ionic: not in enabled drivers build config 00:01:30.713 crypto/ipsec_mb: not in enabled drivers build config 00:01:30.713 crypto/mlx5: not in enabled drivers build config 00:01:30.713 crypto/mvsam: not in enabled drivers build config 00:01:30.713 crypto/nitrox: not in enabled drivers build config 00:01:30.713 crypto/null: not in enabled drivers build config 00:01:30.713 crypto/octeontx: not in enabled drivers build config 00:01:30.713 crypto/openssl: not in enabled drivers build config 00:01:30.713 crypto/scheduler: not in enabled drivers build config 00:01:30.713 crypto/uadk: not in enabled drivers build config 00:01:30.713 crypto/virtio: not in enabled drivers build config 00:01:30.713 compress/isal: not in enabled drivers build config 00:01:30.713 compress/mlx5: not in enabled drivers build config 00:01:30.713 compress/nitrox: not in enabled drivers build config 00:01:30.713 compress/octeontx: not in enabled drivers build config 00:01:30.713 compress/uadk: not in enabled drivers build config 00:01:30.713 compress/zlib: not in enabled drivers build config 00:01:30.713 regex/mlx5: not in enabled drivers build config 00:01:30.713 regex/cn9k: not in enabled drivers build config 00:01:30.713 ml/cnxk: not in enabled drivers build config 00:01:30.713 vdpa/ifc: not in enabled drivers build config 00:01:30.713 vdpa/mlx5: not in enabled drivers build config 00:01:30.713 vdpa/nfp: not in enabled drivers build config 00:01:30.713 vdpa/sfc: not in enabled drivers build config 00:01:30.713 event/cnxk: not in enabled drivers build config 00:01:30.713 event/dlb2: not in enabled drivers build config 00:01:30.713 event/dpaa: not in enabled drivers build config 00:01:30.713 event/dpaa2: not in enabled drivers build config 00:01:30.713 event/dsw: not in enabled drivers build config 00:01:30.713 event/opdl: not in enabled drivers build config 00:01:30.713 event/skeleton: not in enabled drivers build config 00:01:30.713 event/sw: not in enabled drivers build config 00:01:30.713 event/octeontx: not in enabled drivers build config 00:01:30.713 baseband/acc: not in enabled drivers build config 00:01:30.713 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:30.713 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:30.713 baseband/la12xx: not in enabled drivers build config 
00:01:30.713 baseband/null: not in enabled drivers build config 00:01:30.713 baseband/turbo_sw: not in enabled drivers build config 00:01:30.713 gpu/cuda: not in enabled drivers build config 00:01:30.713 00:01:30.713 00:01:30.713 Build targets in project: 224 00:01:30.713 00:01:30.713 DPDK 24.07.0-rc2 00:01:30.713 00:01:30.713 User defined options 00:01:30.713 libdir : lib 00:01:30.713 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:30.713 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:30.713 c_link_args : 00:01:30.713 enable_docs : false 00:01:30.713 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:30.713 enable_kmods : false 00:01:30.713 machine : native 00:01:30.713 tests : false 00:01:30.713 00:01:30.713 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:30.713 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:30.713 15:13:01 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:30.713 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:30.973 [1/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:30.973 [2/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:30.973 [3/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:30.973 [4/723] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:30.973 [5/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:30.973 [6/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:30.973 [7/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:30.973 [8/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:30.973 [9/723] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:30.973 [10/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:30.973 [11/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:30.973 [12/723] Linking static target lib/librte_kvargs.a 00:01:30.973 [13/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:31.234 [14/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:31.234 [15/723] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:31.234 [16/723] Linking static target lib/librte_log.a 00:01:31.234 [17/723] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:01:31.234 [18/723] Linking static target lib/librte_argparse.a 00:01:31.510 [19/723] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.783 [20/723] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.783 [21/723] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.783 [22/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:31.783 [23/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:32.043 [24/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:32.043 [25/723] Linking target lib/librte_log.so.24.2 00:01:32.043 [26/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:32.043 [27/723] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:32.043 [28/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:32.043 [29/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:32.043 [30/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:32.043 [31/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:32.043 [32/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:32.043 [33/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:32.043 [34/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:32.043 [35/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:32.043 [36/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:32.043 [37/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:32.043 [38/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:32.043 [39/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:32.043 [40/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:32.043 [41/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:32.043 [42/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:32.043 [43/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:32.043 [44/723] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:32.043 [45/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:32.043 [46/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:32.043 [47/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:32.043 [48/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:32.043 [49/723] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:01:32.304 [50/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:32.304 [51/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:32.304 [52/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:32.304 [53/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:32.304 [54/723] Linking target lib/librte_kvargs.so.24.2 00:01:32.304 [55/723] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:32.304 [56/723] Linking target lib/librte_argparse.so.24.2 00:01:32.304 [57/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:32.304 [58/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:32.304 [59/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:32.304 [60/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:32.304 [61/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:32.304 [62/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:32.561 [63/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:32.561 [64/723] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:01:32.561 [65/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:32.561 [66/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:32.561 
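The meson configuration summary a few entries above (the "User defined options" block, 224 build targets) is what drives the 723-target ninja run that fills the rest of this section. As a rough, non-authoritative sketch, that configuration could be reproduced by hand roughly as below; the `meson setup` spelling and the `-D` flag forms are assumptions on my part (the deprecation warning above suggests the autotest script still invokes plain `meson`), and the driver list is copied only as far as the log prints it.

```bash
# Sketch only -- not the exact command the autotest script ran.
DPDK_SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk

meson setup "$DPDK_SRC/build-tmp" "$DPDK_SRC" \
    --prefix="$DPDK_SRC/build" \
    --libdir=lib \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dmachine=native \
    -Dtests=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    # driver list only as far as the summary above shows it; the log truncates after a comma
```

With a configuration like this, `ninja -C "$DPDK_SRC/build-tmp" -j48` (the command logged above) performs the compile and link steps that follow.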
[67/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:32.561 [68/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:32.821 [69/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:32.821 [70/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:32.821 [71/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:32.821 [72/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:32.821 [73/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:33.080 [74/723] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:33.080 [75/723] Linking static target lib/librte_pci.a 00:01:33.080 [76/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:33.080 [77/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:33.080 [78/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:33.080 [79/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:33.080 [80/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:01:33.080 [81/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:33.080 [82/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:33.344 [83/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:33.344 [84/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:33.344 [85/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:33.344 [86/723] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:33.344 [87/723] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:33.344 [88/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:33.344 [89/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:33.344 [90/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:33.344 [91/723] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.344 [92/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:33.344 [93/723] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:33.344 [94/723] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:33.344 [95/723] Linking static target lib/librte_ring.a 00:01:33.344 [96/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:33.344 [97/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:33.344 [98/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:33.344 [99/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:33.344 [100/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:33.344 [101/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:33.344 [102/723] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:33.344 [103/723] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:33.344 [104/723] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:33.344 [105/723] Linking static target lib/librte_meter.a 00:01:33.344 [106/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:33.608 [107/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 
00:01:33.608 [108/723] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:33.608 [109/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:33.608 [110/723] Linking static target lib/librte_telemetry.a 00:01:33.608 [111/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:33.608 [112/723] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:33.608 [113/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:33.608 [114/723] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:33.608 [115/723] Linking static target lib/librte_net.a 00:01:33.868 [116/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:33.868 [117/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:33.868 [118/723] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.868 [119/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:33.868 [120/723] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.868 [121/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:33.868 [122/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:33.868 [123/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:33.868 [124/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:34.154 [125/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:34.154 [126/723] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.154 [127/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:34.154 [128/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:34.154 [129/723] Linking static target lib/librte_mempool.a 00:01:34.154 [130/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:34.154 [131/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:34.154 [132/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:34.154 [133/723] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.154 [134/723] Linking static target lib/librte_eal.a 00:01:34.154 [135/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:34.413 [136/723] Linking target lib/librte_telemetry.so.24.2 00:01:34.413 [137/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:34.413 [138/723] Linking static target lib/librte_cmdline.a 00:01:34.413 [139/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:34.413 [140/723] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:34.413 [141/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:34.413 [142/723] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:01:34.413 [143/723] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:34.414 [144/723] Linking static target lib/librte_cfgfile.a 00:01:34.677 [145/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:34.677 [146/723] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:34.677 [147/723] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:34.677 [148/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 
00:01:34.677 [149/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:34.677 [150/723] Linking static target lib/librte_metrics.a 00:01:34.677 [151/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:34.677 [152/723] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:34.677 [153/723] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:34.677 [154/723] Linking static target lib/librte_rcu.a 00:01:34.677 [155/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:34.938 [156/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:34.938 [157/723] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:34.938 [158/723] Linking static target lib/librte_bitratestats.a 00:01:34.938 [159/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:34.938 [160/723] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:34.938 [161/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:35.196 [162/723] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.196 [163/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:35.196 [164/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:35.196 [165/723] Linking static target lib/librte_mbuf.a 00:01:35.196 [166/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:35.196 [167/723] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.196 [168/723] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:35.196 [169/723] Linking static target lib/librte_timer.a 00:01:35.196 [170/723] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.196 [171/723] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.196 [172/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:35.196 [173/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:35.196 [174/723] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.464 [175/723] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:35.464 [176/723] Linking static target lib/librte_bbdev.a 00:01:35.464 [177/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:35.464 [178/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:35.464 [179/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:35.464 [180/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:35.723 [181/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:35.723 [182/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:35.723 [183/723] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.723 [184/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:35.723 [185/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:35.723 [186/723] Linking static target lib/librte_compressdev.a 00:01:35.723 [187/723] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.723 [188/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:35.723 [189/723] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:35.723 [190/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:35.986 [191/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:35.986 [192/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:35.986 [193/723] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.245 [194/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:36.245 [195/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:36.509 [196/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:36.509 [197/723] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.509 [198/723] Linking static target lib/librte_distributor.a 00:01:36.509 [199/723] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.509 [200/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:36.509 [201/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:36.509 [202/723] Linking static target lib/librte_dmadev.a 00:01:36.509 [203/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:36.509 [204/723] Linking static target lib/librte_bpf.a 00:01:36.770 [205/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:36.770 [206/723] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:36.770 [207/723] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:36.770 [208/723] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:36.770 [209/723] Linking static target lib/librte_dispatcher.a 00:01:36.770 [210/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:36.770 [211/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:36.770 [212/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:36.770 [213/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:36.770 [214/723] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:36.770 [215/723] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.033 [216/723] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:37.033 [217/723] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:37.033 [218/723] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:37.033 [219/723] Linking static target lib/librte_gpudev.a 00:01:37.033 [220/723] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:37.033 [221/723] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:37.033 [222/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:37.033 [223/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:37.033 [224/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:37.033 [225/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:37.033 [226/723] Linking static target lib/librte_gro.a 00:01:37.033 [227/723] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:37.033 [228/723] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:37.033 [229/723] Linking static target lib/librte_jobstats.a 00:01:37.033 [230/723] Generating lib/bpf.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:37.033 [231/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:37.033 [232/723] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:37.033 [233/723] Linking static target lib/librte_gso.a 00:01:37.295 [234/723] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:37.295 [235/723] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.295 [236/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:37.295 [237/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:37.556 [238/723] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.556 [239/723] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:37.556 [240/723] Linking static target lib/librte_latencystats.a 00:01:37.556 [241/723] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.556 [242/723] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.556 [243/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:37.556 [244/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:37.556 [245/723] Linking static target lib/librte_ip_frag.a 00:01:37.556 [246/723] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.556 [247/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:37.821 [248/723] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:37.821 [249/723] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:37.821 [250/723] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:37.821 [251/723] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:37.821 [252/723] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:37.821 [253/723] Linking static target lib/librte_efd.a 00:01:37.821 [254/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:37.821 [255/723] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.821 [256/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:37.821 [257/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:38.080 [258/723] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.080 [259/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:38.080 [260/723] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:38.080 [261/723] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:38.080 [262/723] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.080 [263/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:38.080 [264/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:38.344 [265/723] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:38.344 [266/723] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.344 [267/723] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:38.344 [268/723] Compiling C object 
lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:38.344 [269/723] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:38.344 [270/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:38.607 [271/723] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:38.607 [272/723] Linking static target lib/librte_regexdev.a 00:01:38.607 [273/723] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:38.607 [274/723] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:38.607 [275/723] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:38.607 [276/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:38.607 [277/723] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:38.607 [278/723] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:38.607 [279/723] Linking static target lib/librte_pcapng.a 00:01:38.607 [280/723] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:38.607 [281/723] Linking static target lib/librte_rawdev.a 00:01:38.607 [282/723] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:38.871 [283/723] Linking static target lib/librte_power.a 00:01:38.871 [284/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:38.871 [285/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:38.871 [286/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:38.871 [287/723] Linking static target lib/librte_mldev.a 00:01:38.871 [288/723] Linking static target lib/librte_lpm.a 00:01:38.871 [289/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:38.871 [290/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:38.871 [291/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:38.871 [292/723] Linking static target lib/librte_stack.a 00:01:38.871 [293/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:39.132 [294/723] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.132 [295/723] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:39.132 [296/723] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:01:39.132 [297/723] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:39.132 [298/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:39.132 [299/723] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:39.132 [300/723] Linking static target lib/acl/libavx2_tmp.a 00:01:39.132 [301/723] Linking static target lib/librte_cryptodev.a 00:01:39.132 [302/723] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:39.132 [303/723] Linking static target lib/librte_reorder.a 00:01:39.132 [304/723] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.132 [305/723] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:39.132 [306/723] Linking static target lib/librte_security.a 00:01:39.395 [307/723] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:39.395 [308/723] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:39.395 [309/723] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.395 [310/723] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:39.395 [311/723] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:39.395 [312/723] Linking static target lib/librte_hash.a 00:01:39.395 [313/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:39.658 [314/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:39.658 [315/723] Linking static target lib/librte_rib.a 00:01:39.658 [316/723] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:39.658 [317/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:39.658 [318/723] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.658 [319/723] Linking static target lib/acl/libavx512_tmp.a 00:01:39.658 [320/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:39.659 [321/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:39.659 [322/723] Linking static target lib/librte_acl.a 00:01:39.659 [323/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:39.659 [324/723] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:39.659 [325/723] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:39.659 [326/723] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.659 [327/723] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.659 [328/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:39.659 [329/723] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:39.917 [330/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:39.917 [331/723] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.917 [332/723] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:39.917 [333/723] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:39.917 [334/723] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:39.917 [335/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:39.917 [336/723] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:39.917 [337/723] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:40.181 [338/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:40.181 [339/723] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.181 [340/723] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:40.444 [341/723] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.444 [342/723] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.444 [343/723] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:01:40.444 [344/723] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:41.014 [345/723] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:41.014 [346/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:41.014 [347/723] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:41.014 [348/723] Linking static target lib/librte_eventdev.a 00:01:41.014 [349/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:41.014 [350/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:41.014 [351/723] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:41.014 [352/723] 
Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:41.014 [353/723] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:41.014 [354/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:41.014 [355/723] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.014 [356/723] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:41.014 [357/723] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.014 [358/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:41.274 [359/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:41.274 [360/723] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:41.274 [361/723] Linking static target lib/librte_member.a 00:01:41.274 [362/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:41.274 [363/723] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:41.274 [364/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:41.274 [365/723] Linking static target lib/librte_sched.a 00:01:41.274 [366/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:41.274 [367/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:41.274 [368/723] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:41.274 [369/723] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:41.274 [370/723] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:41.535 [371/723] Linking static target lib/librte_fib.a 00:01:41.535 [372/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:41.535 [373/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:41.535 [374/723] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:41.535 [375/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:41.535 [376/723] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:41.535 [377/723] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:41.797 [378/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:41.797 [379/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:41.797 [380/723] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.797 [381/723] Linking static target lib/librte_ethdev.a 00:01:41.797 [382/723] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:41.797 [383/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:41.797 [384/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:41.797 [385/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:41.797 [386/723] Linking static target lib/librte_ipsec.a 00:01:41.797 [387/723] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.057 [388/723] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.057 [389/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:42.057 [390/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:42.321 [391/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:42.321 [392/723] Compiling C object 
lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:42.321 [393/723] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:42.321 [394/723] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:42.321 [395/723] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:42.321 [396/723] Linking static target lib/librte_pdump.a 00:01:42.321 [397/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:42.321 [398/723] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:42.321 [399/723] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.321 [400/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:42.579 [401/723] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:42.579 [402/723] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:42.579 [403/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:42.579 [404/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:42.579 [405/723] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:42.579 [406/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:42.579 [407/723] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:42.839 [408/723] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:42.839 [409/723] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:42.839 [410/723] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.839 [411/723] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:42.839 [412/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:42.839 [413/723] Linking static target lib/librte_pdcp.a 00:01:42.839 [414/723] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:42.839 [415/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:42.839 [416/723] Linking static target lib/librte_table.a 00:01:42.839 [417/723] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:42.839 [418/723] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:43.105 [419/723] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:43.105 [420/723] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:43.105 [421/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:43.369 [422/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:43.369 [423/723] Linking static target lib/librte_graph.a 00:01:43.369 [424/723] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.369 [425/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:43.369 [426/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:43.369 [427/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:43.635 [428/723] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:43.635 [429/723] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:43.635 [430/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:43.635 [431/723] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:01:43.635 [432/723] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 
00:01:43.635 [433/723] Linking static target lib/librte_port.a 00:01:43.635 [434/723] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:43.635 [435/723] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:43.635 [436/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:43.635 [437/723] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:43.897 [438/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:43.897 [439/723] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:43.897 [440/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:44.160 [441/723] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.160 [442/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:44.160 [443/723] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:44.160 [444/723] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:44.160 [445/723] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:44.160 [446/723] Linking static target drivers/librte_bus_vdev.a 00:01:44.160 [447/723] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.160 [448/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:44.430 [449/723] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:44.430 [450/723] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.430 [451/723] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:44.430 [452/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:44.430 [453/723] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:44.430 [454/723] Linking static target lib/librte_node.a 00:01:44.430 [455/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:44.430 [456/723] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:44.691 [457/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:44.691 [458/723] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.691 [459/723] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:44.691 [460/723] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.691 [461/723] Linking static target drivers/librte_bus_pci.a 00:01:44.691 [462/723] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:44.691 [463/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:44.691 [464/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:44.691 [465/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:44.691 [466/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:44.691 [467/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:44.691 [468/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:44.954 [469/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:44.954 [470/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 
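Near the end of this section, once target [723/723] links, the trace shows a `uname -s` check (`[[ Linux == \F\r\e\e\B\S\D ]]`) followed by `ninja ... -j48 install`. The surrounding autobuild_common.sh logic is not visible in the log, so the following is only a sketch of the pattern those trace lines suggest, with the FreeBSD branch left out because this run is on Linux.

```bash
# Illustrative sketch of the install step seen later in this log; the real
# logic in autobuild_common.sh may differ.
BUILD_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp

if [[ "$(uname -s)" != "FreeBSD" ]]; then
    # On Linux (as in this run), installation also goes through ninja.
    ninja -C "$BUILD_DIR" -j48 install
fi
```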
00:01:44.954 [471/723] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:44.954 [472/723] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:44.954 [473/723] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.954 [474/723] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:45.236 [475/723] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:45.236 [476/723] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:45.236 [477/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:45.236 [478/723] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.236 [479/723] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:45.236 [480/723] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:45.237 [481/723] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:45.237 [482/723] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:45.519 [483/723] Linking target lib/librte_eal.so.24.2 00:01:45.519 [484/723] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:45.519 [485/723] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:45.519 [486/723] Linking static target drivers/librte_mempool_ring.a 00:01:45.519 [487/723] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:45.519 [488/723] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:45.519 [489/723] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.519 [490/723] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:45.519 [491/723] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:01:45.519 [492/723] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:01:45.792 [493/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:45.792 [494/723] Linking target lib/librte_ring.so.24.2 00:01:45.792 [495/723] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:45.792 [496/723] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:45.792 [497/723] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:45.792 [498/723] Linking target lib/librte_meter.so.24.2 00:01:45.793 [499/723] Linking target lib/librte_pci.so.24.2 00:01:45.793 [500/723] Linking target lib/librte_timer.so.24.2 00:01:45.793 [501/723] Linking target lib/librte_cfgfile.so.24.2 00:01:45.793 [502/723] Linking target lib/librte_acl.so.24.2 00:01:45.793 [503/723] Linking target lib/librte_dmadev.so.24.2 00:01:45.793 [504/723] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:45.793 [505/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:45.793 [506/723] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:01:45.793 [507/723] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:45.793 [508/723] Linking target lib/librte_jobstats.so.24.2 00:01:46.052 [509/723] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:01:46.052 [510/723] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:01:46.052 [511/723] Linking target lib/librte_rcu.so.24.2 00:01:46.052 [512/723] Linking target lib/librte_rawdev.so.24.2 00:01:46.052 [513/723] Linking target lib/librte_stack.so.24.2 00:01:46.052 [514/723] Linking target 
lib/librte_mempool.so.24.2 00:01:46.052 [515/723] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:01:46.052 [516/723] Linking target drivers/librte_bus_vdev.so.24.2 00:01:46.052 [517/723] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:46.052 [518/723] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:01:46.052 [519/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:46.052 [520/723] Linking target drivers/librte_bus_pci.so.24.2 00:01:46.052 [521/723] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:01:46.052 [522/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:46.313 [523/723] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:01:46.313 [524/723] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:01:46.313 [525/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:46.313 [526/723] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:01:46.313 [527/723] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:46.313 [528/723] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:01:46.313 [529/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:46.313 [530/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:46.313 [531/723] Linking target lib/librte_mbuf.so.24.2 00:01:46.313 [532/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:46.313 [533/723] Linking target drivers/librte_mempool_ring.so.24.2 00:01:46.313 [534/723] Linking target lib/librte_rib.so.24.2 00:01:46.313 [535/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:46.575 [536/723] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:46.575 [537/723] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:46.575 [538/723] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:01:46.575 [539/723] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:01:46.575 [540/723] Linking target lib/librte_fib.so.24.2 00:01:46.575 [541/723] Linking target lib/librte_net.so.24.2 00:01:46.575 [542/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:46.575 [543/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:46.575 [544/723] Linking target lib/librte_bbdev.so.24.2 00:01:46.837 [545/723] Linking target lib/librte_compressdev.so.24.2 00:01:46.837 [546/723] Linking target lib/librte_cryptodev.so.24.2 00:01:46.837 [547/723] Linking target lib/librte_distributor.so.24.2 00:01:46.837 [548/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:46.837 [549/723] Linking target lib/librte_gpudev.so.24.2 00:01:46.837 [550/723] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:46.837 [551/723] Linking target lib/librte_regexdev.so.24.2 00:01:46.837 [552/723] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:46.837 [553/723] Linking target lib/librte_mldev.so.24.2 00:01:46.837 [554/723] Linking target lib/librte_reorder.so.24.2 00:01:46.837 [555/723] 
Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:01:46.837 [556/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:46.837 [557/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:46.837 [558/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:46.837 [559/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:46.837 [560/723] Linking target lib/librte_sched.so.24.2 00:01:46.837 [561/723] Linking target lib/librte_cmdline.so.24.2 00:01:46.837 [562/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:46.837 [563/723] Linking target lib/librte_hash.so.24.2 00:01:46.837 [564/723] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:47.104 [565/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:47.104 [566/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:47.104 [567/723] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:01:47.104 [568/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:47.104 [569/723] Linking target lib/librte_security.so.24.2 00:01:47.104 [570/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:47.104 [571/723] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:01:47.104 [572/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:47.104 [573/723] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:01:47.104 [574/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:47.104 [575/723] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:01:47.104 [576/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:47.104 [577/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:47.104 [578/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:47.366 [579/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:47.366 [580/723] Linking target lib/librte_efd.so.24.2 00:01:47.366 [581/723] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:01:47.366 [582/723] Linking target lib/librte_lpm.so.24.2 00:01:47.366 [583/723] Linking target lib/librte_member.so.24.2 00:01:47.366 [584/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:47.366 [585/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:47.366 [586/723] Linking target lib/librte_ipsec.so.24.2 00:01:47.366 [587/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:47.366 [588/723] Linking target lib/librte_pdcp.so.24.2 00:01:47.366 [589/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:47.629 [590/723] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:01:47.629 [591/723] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:01:47.629 [592/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:47.629 [593/723] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:47.629 [594/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:47.629 [595/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:47.629 [596/723] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:47.888 [597/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:47.888 [598/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:48.151 [599/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:48.151 [600/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:48.151 [601/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:48.151 [602/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:48.151 [603/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:48.151 [604/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:48.151 [605/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:48.414 [606/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:48.414 [607/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:48.414 [608/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:48.414 [609/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:48.414 [610/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:48.414 [611/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:48.414 [612/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:48.414 [613/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:48.674 [614/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:48.674 [615/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:48.674 [616/723] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:48.674 [617/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:48.674 [618/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:48.674 [619/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:48.935 [620/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:48.935 [621/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:48.935 [622/723] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:48.935 [623/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:49.193 [624/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:49.193 [625/723] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:49.193 [626/723] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:49.451 [627/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:49.451 [628/723] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:49.451 [629/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:49.451 [630/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:49.451 [631/723] 
Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:49.451 [632/723] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:49.451 [633/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:49.451 [634/723] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:49.451 [635/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:49.451 [636/723] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:49.710 [637/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:49.710 [638/723] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.710 [639/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:49.710 [640/723] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:49.710 [641/723] Linking target lib/librte_ethdev.so.24.2 00:01:49.710 [642/723] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:49.969 [643/723] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:01:49.969 [644/723] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:49.969 [645/723] Linking target lib/librte_metrics.so.24.2 00:01:49.969 [646/723] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:49.969 [647/723] Linking target lib/librte_pcapng.so.24.2 00:01:49.969 [648/723] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:49.969 [649/723] Linking target lib/librte_gso.so.24.2 00:01:49.969 [650/723] Linking target lib/librte_gro.so.24.2 00:01:49.969 [651/723] Linking target lib/librte_ip_frag.so.24.2 00:01:49.969 [652/723] Linking target lib/librte_bpf.so.24.2 00:01:49.969 [653/723] Linking target lib/librte_power.so.24.2 00:01:49.969 [654/723] Linking target lib/librte_eventdev.so.24.2 00:01:49.969 [655/723] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:01:49.969 [656/723] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:01:49.969 [657/723] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:01:49.969 [658/723] Linking target lib/librte_bitratestats.so.24.2 00:01:49.969 [659/723] Linking target lib/librte_latencystats.so.24.2 00:01:49.969 [660/723] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:01:49.969 [661/723] Linking target lib/librte_graph.so.24.2 00:01:50.228 [662/723] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:01:50.228 [663/723] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:01:50.228 [664/723] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:50.228 [665/723] Linking target lib/librte_pdump.so.24.2 00:01:50.228 [666/723] Linking target lib/librte_dispatcher.so.24.2 00:01:50.228 [667/723] Linking target lib/librte_port.so.24.2 00:01:50.228 [668/723] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:01:50.228 [669/723] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:50.228 [670/723] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:50.228 [671/723] Linking target lib/librte_node.so.24.2 00:01:50.228 [672/723] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:01:50.485 [673/723] Linking target lib/librte_table.so.24.2 00:01:50.485 [674/723] Compiling C object 
app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:50.485 [675/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:50.485 [676/723] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:01:50.485 [677/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:50.743 [678/723] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:51.002 [679/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:51.002 [680/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:51.002 [681/723] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:51.002 [682/723] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:51.002 [683/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:51.569 [684/723] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:51.569 [685/723] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:51.569 [686/723] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:51.569 [687/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:51.569 [688/723] Linking static target drivers/librte_net_i40e.a 00:01:51.570 [689/723] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:52.136 [690/723] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.136 [691/723] Linking target drivers/librte_net_i40e.so.24.2 00:01:52.136 [692/723] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:53.069 [693/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:53.069 [694/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:53.635 [695/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:01.744 [696/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:01.744 [697/723] Linking static target lib/librte_pipeline.a 00:02:02.003 [698/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:02.003 [699/723] Linking static target lib/librte_vhost.a 00:02:02.568 [700/723] Linking target app/dpdk-test-acl 00:02:02.568 [701/723] Linking target app/dpdk-test-dma-perf 00:02:02.568 [702/723] Linking target app/dpdk-test-sad 00:02:02.568 [703/723] Linking target app/dpdk-test-regex 00:02:02.568 [704/723] Linking target app/dpdk-test-cmdline 00:02:02.568 [705/723] Linking target app/dpdk-test-bbdev 00:02:02.568 [706/723] Linking target app/dpdk-pdump 00:02:02.568 [707/723] Linking target app/dpdk-test-fib 00:02:02.568 [708/723] Linking target app/dpdk-test-compress-perf 00:02:02.568 [709/723] Linking target app/dpdk-test-mldev 00:02:02.568 [710/723] Linking target app/dpdk-graph 00:02:02.569 [711/723] Linking target app/dpdk-test-pipeline 00:02:02.569 [712/723] Linking target app/dpdk-dumpcap 00:02:02.569 [713/723] Linking target app/dpdk-proc-info 00:02:02.569 [714/723] Linking target app/dpdk-test-security-perf 00:02:02.569 [715/723] Linking target app/dpdk-test-crypto-perf 00:02:02.569 [716/723] Linking target app/dpdk-test-gpudev 00:02:02.569 [717/723] Linking target app/dpdk-test-flow-perf 00:02:02.569 [718/723] Linking target app/dpdk-test-eventdev 00:02:02.569 [719/723] Linking target app/dpdk-testpmd 00:02:03.134 [720/723] Generating lib/vhost.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:03.134 [721/723] Linking target lib/librte_vhost.so.24.2 00:02:04.075 [722/723] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.075 [723/723] Linking target lib/librte_pipeline.so.24.2 00:02:04.075 15:13:34 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:02:04.075 15:13:34 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:04.075 15:13:34 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:04.075 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:04.075 [0/1] Installing files. 00:02:04.336 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:04.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/memory.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:04.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/cpu.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:04.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/counters.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:04.336 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:04.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:04.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:04.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:04.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:04.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:04.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:04.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:04.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:04.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:04.336 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:04.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:04.337 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.337 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:04.338 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:04.338 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:04.339 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:04.339 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:04.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:04.341 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 
00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:04.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:04.342 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:04.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:04.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:04.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:04.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:04.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:04.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:04.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:04.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:04.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:04.342 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:04.342 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_argparse.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_rcu.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_compressdev.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.342 Installing lib/librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_power.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_pipeline.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing lib/librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing drivers/librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:04.923 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing drivers/librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:04.923 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing drivers/librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:04.923 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.923 Installing drivers/librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:04.923 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.923 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.923 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.923 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.923 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.923 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.923 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.923 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.924 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.924 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.924 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.924 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.924 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.924 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.924 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.924 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.924 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.924 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.924 
Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.924 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/argparse/rte_argparse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ptr_compress/rte_ptr_compress.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.926 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
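A minimal check, not part of this job, that the staged include tree really contains the headers named in the install lines above; the prefix is the build/include destination used throughout this step, and the two header names are picked arbitrarily from the list:
# prefix taken from the install destinations above
DPDK_INC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
ls "$DPDK_INC/rte_vhost.h" "$DPDK_INC/rte_security.h"   # errors out if the staging step missed them
ls "$DPDK_INC" | wc -l                                   # rough count of headers staged so far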
00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.927 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.928 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry-exporter.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:04.929 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:04.929 Installing symlink pointing to librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:04.929 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:04.929 Installing symlink pointing to librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:04.929 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:04.929 Installing symlink pointing to librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so.24 00:02:04.929 Installing symlink pointing to librte_argparse.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so 00:02:04.929 Installing symlink pointing to librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:04.929 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:04.929 Installing symlink pointing to librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:04.929 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:04.929 Installing symlink pointing to librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:04.929 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:04.929 Installing symlink pointing 
to librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:04.929 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:04.929 Installing symlink pointing to librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:04.929 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:04.929 Installing symlink pointing to librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:04.929 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:04.929 Installing symlink pointing to librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:04.929 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:04.929 Installing symlink pointing to librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:04.929 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:04.929 Installing symlink pointing to librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:04.929 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:04.929 Installing symlink pointing to librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:04.929 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:04.929 Installing symlink pointing to librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:04.929 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:04.929 Installing symlink pointing to librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:04.929 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:04.929 Installing symlink pointing to librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:04.929 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:04.929 Installing symlink pointing to librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:04.929 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:04.929 Installing symlink pointing to librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:04.929 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:04.929 Installing symlink pointing to librte_bbdev.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:04.929 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:04.929 Installing symlink pointing to librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:04.929 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:04.929 Installing symlink pointing to librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:04.929 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:04.929 Installing symlink pointing to librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:04.929 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:04.929 Installing symlink pointing to librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:04.929 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:04.929 Installing symlink pointing to librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:04.929 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:04.929 Installing symlink pointing to librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:04.929 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:04.929 Installing symlink pointing to librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:04.929 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:04.929 Installing symlink pointing to librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:04.929 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:04.929 Installing symlink pointing to librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:04.929 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:04.929 Installing symlink pointing to librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:04.929 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:04.929 Installing symlink pointing to librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:04.929 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:04.929 Installing 
symlink pointing to librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:04.929 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:04.929 Installing symlink pointing to librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:04.929 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:04.929 Installing symlink pointing to librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:04.929 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:04.929 Installing symlink pointing to librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:04.929 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:04.929 Installing symlink pointing to librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:04.929 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:04.930 Installing symlink pointing to librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:04.930 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:04.930 Installing symlink pointing to librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:04.930 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:04.930 Installing symlink pointing to librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:04.930 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:04.930 Installing symlink pointing to librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:04.930 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:04.930 Installing symlink pointing to librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:04.930 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:04.930 Installing symlink pointing to librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:04.930 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:04.930 Installing symlink pointing to librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:04.930 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:04.930 Installing symlink pointing to librte_rib.so.24.2 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:04.930 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:04.930 Installing symlink pointing to librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:04.930 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:04.930 Installing symlink pointing to librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:04.930 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:04.930 Installing symlink pointing to librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:04.930 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:04.930 Installing symlink pointing to librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:04.930 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:04.930 Installing symlink pointing to librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:04.930 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:04.930 Installing symlink pointing to librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:04.930 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:04.930 Installing symlink pointing to librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:04.930 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:04.930 Installing symlink pointing to librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:04.930 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:04.930 Installing symlink pointing to librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:04.930 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:04.930 Installing symlink pointing to librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:04.930 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:04.930 Installing symlink pointing to librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:04.930 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:04.930 Installing symlink pointing to librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 
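The symlink entries above build the usual shared-library chain for each component: the unversioned librte_*.so development link points at the major-version name, which in turn points at the real librte_*.so.24.2 object. A small sketch, assuming the build/lib path from the log, that makes the chain visible for one of the libraries installed above:
DPDK_LIB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
readlink "$DPDK_LIB/librte_ethdev.so"      # -> librte_ethdev.so.24, per the install lines above
readlink "$DPDK_LIB/librte_ethdev.so.24"   # -> librte_ethdev.so.24.2, the actual shared object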
00:02:04.930 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:04.930 Installing symlink pointing to librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:04.930 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:04.930 Installing symlink pointing to librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:04.930 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:04.930 Installing symlink pointing to librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:02:04.930 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:02:04.930 Installing symlink pointing to librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:02:04.930 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:02:04.930 Installing symlink pointing to librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:02:04.930 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:02:04.930 Installing symlink pointing to librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:02:04.930 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:02:04.930 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:02:04.930 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:02:04.930 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:02:04.930 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:02:04.930 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:02:04.930 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:02:04.930 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:02:04.930 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:02:04.930 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:02:04.930 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:02:04.930 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:02:04.930 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:02:04.930 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:02:04.930 15:13:35 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:02:04.930 15:13:35 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:04.930 00:02:04.930 real 0m39.844s 00:02:04.930 user 13m55.070s 00:02:04.930 sys 2m1.494s 00:02:04.930 15:13:35 
build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:04.930 15:13:35 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:04.930 ************************************ 00:02:04.930 END TEST build_native_dpdk 00:02:04.930 ************************************ 00:02:04.930 15:13:35 -- common/autotest_common.sh@1142 -- $ return 0 00:02:04.930 15:13:35 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:04.930 15:13:35 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:04.930 15:13:35 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:04.930 15:13:35 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:04.930 15:13:35 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:04.930 15:13:35 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:04.930 15:13:35 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:04.930 15:13:35 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:04.930 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:05.188 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:05.188 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:05.188 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:05.446 Using 'verbs' RDMA provider 00:02:15.978 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:25.942 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:25.942 Creating mk/config.mk...done. 00:02:25.942 Creating mk/cc.flags.mk...done. 00:02:25.942 Type 'make' to build. 00:02:25.942 15:13:55 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:25.942 15:13:55 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:25.942 15:13:55 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:25.942 15:13:55 -- common/autotest_common.sh@10 -- $ set +x 00:02:25.942 ************************************ 00:02:25.942 START TEST make 00:02:25.942 ************************************ 00:02:25.942 15:13:55 make -- common/autotest_common.sh@1123 -- $ make -j48 00:02:25.942 make[1]: Nothing to be done for 'all'. 
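The configure step above resolves the prebuilt DPDK through pkg-config in the staged build/lib/pkgconfig directory ("Using ... for additional libs"). A minimal sketch of the same lookup done by hand, assuming the paths printed in the log:
export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
pkg-config --modversion libdpdk      # should report the 24.07-rc2 snapshot built earlier in this job
pkg-config --cflags --libs libdpdk   # compile and link flags a consumer such as SPDK picks up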
00:02:26.513 The Meson build system 00:02:26.513 Version: 1.3.1 00:02:26.513 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:26.513 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:26.513 Build type: native build 00:02:26.513 Project name: libvfio-user 00:02:26.513 Project version: 0.0.1 00:02:26.513 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:26.513 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:26.513 Host machine cpu family: x86_64 00:02:26.513 Host machine cpu: x86_64 00:02:26.513 Run-time dependency threads found: YES 00:02:26.513 Library dl found: YES 00:02:26.513 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:26.513 Run-time dependency json-c found: YES 0.17 00:02:26.513 Run-time dependency cmocka found: YES 1.1.7 00:02:26.513 Program pytest-3 found: NO 00:02:26.513 Program flake8 found: NO 00:02:26.513 Program misspell-fixer found: NO 00:02:26.513 Program restructuredtext-lint found: NO 00:02:26.513 Program valgrind found: YES (/usr/bin/valgrind) 00:02:26.513 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:26.513 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:26.513 Compiler for C supports arguments -Wwrite-strings: YES 00:02:26.513 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:26.513 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:26.513 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:26.513 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
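The "Run-time dependency ... found" lines above are meson resolving json-c and cmocka, typically through the pkg-config binary it reports finding. A hedged way to reproduce those two lookups outside meson on the same machine, assuming both .pc files sit on the default search path:
pkg-config --modversion json-c   # reported as 0.17 by meson above
pkg-config --modversion cmocka   # reported as 1.1.7 by meson above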
00:02:26.513 Build targets in project: 8 00:02:26.513 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:26.513 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:26.513 00:02:26.513 libvfio-user 0.0.1 00:02:26.513 00:02:26.513 User defined options 00:02:26.513 buildtype : debug 00:02:26.513 default_library: shared 00:02:26.513 libdir : /usr/local/lib 00:02:26.513 00:02:26.513 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:27.087 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:27.351 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:27.351 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:27.351 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:27.351 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:27.351 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:27.351 [6/37] Compiling C object samples/null.p/null.c.o 00:02:27.351 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:27.351 [8/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:27.351 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:27.351 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:27.351 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:27.351 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:27.351 [13/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:27.614 [14/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:27.614 [15/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:27.614 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:27.614 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:27.614 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:27.614 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:27.614 [20/37] Compiling C object samples/server.p/server.c.o 00:02:27.614 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:27.614 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:27.614 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:27.614 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:27.614 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:27.614 [26/37] Compiling C object samples/client.p/client.c.o 00:02:27.614 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:27.614 [28/37] Linking target samples/client 00:02:27.877 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:27.877 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:27.877 [31/37] Linking target test/unit_tests 00:02:27.877 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:28.139 [33/37] Linking target samples/server 00:02:28.139 [34/37] Linking target samples/gpio-pci-idio-16 00:02:28.139 [35/37] Linking target samples/shadow_ioeventfd_server 00:02:28.139 [36/37] Linking target samples/lspci 00:02:28.139 [37/37] Linking target samples/null 00:02:28.139 INFO: autodetecting backend as ninja 00:02:28.139 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
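Taken together, the configuration summary and the [N/37] compile steps above correspond to a plain meson-plus-ninja sequence. A sketch of the equivalent manual invocation, assuming the source and build directories printed in the summary and the options from the "User defined options" block:
meson setup --buildtype debug --default-library shared --libdir /usr/local/lib \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug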
00:02:28.139 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:29.082 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:29.082 ninja: no work to do. 00:02:41.341 CC lib/ut/ut.o 00:02:41.341 CC lib/log/log.o 00:02:41.341 CC lib/log/log_flags.o 00:02:41.341 CC lib/log/log_deprecated.o 00:02:41.341 CC lib/ut_mock/mock.o 00:02:41.341 LIB libspdk_ut.a 00:02:41.341 LIB libspdk_log.a 00:02:41.341 LIB libspdk_ut_mock.a 00:02:41.341 SO libspdk_ut.so.2.0 00:02:41.341 SO libspdk_ut_mock.so.6.0 00:02:41.341 SO libspdk_log.so.7.0 00:02:41.341 SYMLINK libspdk_ut_mock.so 00:02:41.341 SYMLINK libspdk_ut.so 00:02:41.341 SYMLINK libspdk_log.so 00:02:41.341 CC lib/ioat/ioat.o 00:02:41.341 CXX lib/trace_parser/trace.o 00:02:41.341 CC lib/dma/dma.o 00:02:41.341 CC lib/util/base64.o 00:02:41.341 CC lib/util/bit_array.o 00:02:41.341 CC lib/util/cpuset.o 00:02:41.341 CC lib/util/crc16.o 00:02:41.341 CC lib/util/crc32.o 00:02:41.341 CC lib/util/crc32c.o 00:02:41.341 CC lib/util/crc32_ieee.o 00:02:41.341 CC lib/util/crc64.o 00:02:41.341 CC lib/util/dif.o 00:02:41.341 CC lib/util/fd.o 00:02:41.341 CC lib/util/file.o 00:02:41.341 CC lib/util/hexlify.o 00:02:41.341 CC lib/util/iov.o 00:02:41.341 CC lib/util/math.o 00:02:41.341 CC lib/util/pipe.o 00:02:41.341 CC lib/util/strerror_tls.o 00:02:41.341 CC lib/util/string.o 00:02:41.341 CC lib/util/uuid.o 00:02:41.341 CC lib/util/fd_group.o 00:02:41.341 CC lib/util/xor.o 00:02:41.341 CC lib/util/zipf.o 00:02:41.341 CC lib/vfio_user/host/vfio_user_pci.o 00:02:41.341 CC lib/vfio_user/host/vfio_user.o 00:02:41.341 LIB libspdk_dma.a 00:02:41.341 SO libspdk_dma.so.4.0 00:02:41.341 SYMLINK libspdk_dma.so 00:02:41.341 LIB libspdk_ioat.a 00:02:41.341 SO libspdk_ioat.so.7.0 00:02:41.341 LIB libspdk_vfio_user.a 00:02:41.341 SYMLINK libspdk_ioat.so 00:02:41.341 SO libspdk_vfio_user.so.5.0 00:02:41.341 SYMLINK libspdk_vfio_user.so 00:02:41.341 LIB libspdk_util.a 00:02:41.342 SO libspdk_util.so.9.1 00:02:41.342 SYMLINK libspdk_util.so 00:02:41.599 LIB libspdk_trace_parser.a 00:02:41.599 SO libspdk_trace_parser.so.5.0 00:02:41.599 CC lib/idxd/idxd.o 00:02:41.599 CC lib/conf/conf.o 00:02:41.599 CC lib/json/json_parse.o 00:02:41.599 CC lib/vmd/vmd.o 00:02:41.599 CC lib/env_dpdk/env.o 00:02:41.599 CC lib/idxd/idxd_user.o 00:02:41.599 CC lib/env_dpdk/memory.o 00:02:41.600 CC lib/json/json_util.o 00:02:41.600 CC lib/idxd/idxd_kernel.o 00:02:41.600 CC lib/vmd/led.o 00:02:41.600 CC lib/env_dpdk/pci.o 00:02:41.600 CC lib/json/json_write.o 00:02:41.600 CC lib/env_dpdk/init.o 00:02:41.600 CC lib/rdma_provider/common.o 00:02:41.600 CC lib/env_dpdk/threads.o 00:02:41.600 CC lib/rdma_utils/rdma_utils.o 00:02:41.600 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:41.600 CC lib/env_dpdk/pci_ioat.o 00:02:41.600 CC lib/env_dpdk/pci_virtio.o 00:02:41.600 CC lib/env_dpdk/pci_vmd.o 00:02:41.600 CC lib/env_dpdk/pci_idxd.o 00:02:41.600 CC lib/env_dpdk/pci_event.o 00:02:41.600 CC lib/env_dpdk/sigbus_handler.o 00:02:41.600 CC lib/env_dpdk/pci_dpdk.o 00:02:41.600 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:41.600 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:41.600 SYMLINK libspdk_trace_parser.so 00:02:41.858 LIB libspdk_conf.a 00:02:41.858 SO libspdk_conf.so.6.0 00:02:41.858 LIB libspdk_rdma_utils.a 00:02:41.858 LIB libspdk_rdma_provider.a 00:02:41.858 SO libspdk_rdma_utils.so.1.0 00:02:41.858 LIB libspdk_json.a 
00:02:41.858 SYMLINK libspdk_conf.so 00:02:41.858 SO libspdk_rdma_provider.so.6.0 00:02:41.858 SO libspdk_json.so.6.0 00:02:41.858 SYMLINK libspdk_rdma_utils.so 00:02:42.116 SYMLINK libspdk_rdma_provider.so 00:02:42.116 SYMLINK libspdk_json.so 00:02:42.116 LIB libspdk_idxd.a 00:02:42.116 CC lib/jsonrpc/jsonrpc_server.o 00:02:42.116 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:42.116 CC lib/jsonrpc/jsonrpc_client.o 00:02:42.116 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:42.116 SO libspdk_idxd.so.12.0 00:02:42.373 SYMLINK libspdk_idxd.so 00:02:42.373 LIB libspdk_vmd.a 00:02:42.373 SO libspdk_vmd.so.6.0 00:02:42.373 SYMLINK libspdk_vmd.so 00:02:42.373 LIB libspdk_jsonrpc.a 00:02:42.373 SO libspdk_jsonrpc.so.6.0 00:02:42.630 SYMLINK libspdk_jsonrpc.so 00:02:42.630 CC lib/rpc/rpc.o 00:02:42.889 LIB libspdk_rpc.a 00:02:42.889 SO libspdk_rpc.so.6.0 00:02:42.889 SYMLINK libspdk_rpc.so 00:02:43.147 LIB libspdk_env_dpdk.a 00:02:43.147 SO libspdk_env_dpdk.so.14.1 00:02:43.147 CC lib/keyring/keyring.o 00:02:43.147 CC lib/keyring/keyring_rpc.o 00:02:43.147 CC lib/trace/trace.o 00:02:43.147 CC lib/trace/trace_flags.o 00:02:43.147 CC lib/trace/trace_rpc.o 00:02:43.147 CC lib/notify/notify.o 00:02:43.147 CC lib/notify/notify_rpc.o 00:02:43.405 SYMLINK libspdk_env_dpdk.so 00:02:43.405 LIB libspdk_notify.a 00:02:43.405 SO libspdk_notify.so.6.0 00:02:43.405 LIB libspdk_keyring.a 00:02:43.405 SYMLINK libspdk_notify.so 00:02:43.405 SO libspdk_keyring.so.1.0 00:02:43.405 LIB libspdk_trace.a 00:02:43.405 SO libspdk_trace.so.10.0 00:02:43.405 SYMLINK libspdk_keyring.so 00:02:43.405 SYMLINK libspdk_trace.so 00:02:43.663 CC lib/thread/thread.o 00:02:43.663 CC lib/thread/iobuf.o 00:02:43.663 CC lib/sock/sock.o 00:02:43.663 CC lib/sock/sock_rpc.o 00:02:44.228 LIB libspdk_sock.a 00:02:44.228 SO libspdk_sock.so.10.0 00:02:44.228 SYMLINK libspdk_sock.so 00:02:44.228 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:44.228 CC lib/nvme/nvme_ctrlr.o 00:02:44.228 CC lib/nvme/nvme_fabric.o 00:02:44.228 CC lib/nvme/nvme_ns_cmd.o 00:02:44.228 CC lib/nvme/nvme_ns.o 00:02:44.228 CC lib/nvme/nvme_pcie_common.o 00:02:44.228 CC lib/nvme/nvme_pcie.o 00:02:44.228 CC lib/nvme/nvme_qpair.o 00:02:44.228 CC lib/nvme/nvme.o 00:02:44.228 CC lib/nvme/nvme_quirks.o 00:02:44.229 CC lib/nvme/nvme_transport.o 00:02:44.229 CC lib/nvme/nvme_discovery.o 00:02:44.229 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:44.229 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:44.229 CC lib/nvme/nvme_tcp.o 00:02:44.229 CC lib/nvme/nvme_opal.o 00:02:44.229 CC lib/nvme/nvme_io_msg.o 00:02:44.229 CC lib/nvme/nvme_poll_group.o 00:02:44.229 CC lib/nvme/nvme_zns.o 00:02:44.229 CC lib/nvme/nvme_stubs.o 00:02:44.229 CC lib/nvme/nvme_auth.o 00:02:44.229 CC lib/nvme/nvme_cuse.o 00:02:44.229 CC lib/nvme/nvme_vfio_user.o 00:02:44.229 CC lib/nvme/nvme_rdma.o 00:02:45.601 LIB libspdk_thread.a 00:02:45.601 SO libspdk_thread.so.10.1 00:02:45.601 SYMLINK libspdk_thread.so 00:02:45.601 CC lib/accel/accel.o 00:02:45.601 CC lib/vfu_tgt/tgt_endpoint.o 00:02:45.601 CC lib/blob/blobstore.o 00:02:45.601 CC lib/init/json_config.o 00:02:45.601 CC lib/virtio/virtio.o 00:02:45.601 CC lib/vfu_tgt/tgt_rpc.o 00:02:45.601 CC lib/accel/accel_rpc.o 00:02:45.601 CC lib/blob/request.o 00:02:45.601 CC lib/init/subsystem.o 00:02:45.601 CC lib/accel/accel_sw.o 00:02:45.601 CC lib/blob/zeroes.o 00:02:45.601 CC lib/init/subsystem_rpc.o 00:02:45.601 CC lib/virtio/virtio_vhost_user.o 00:02:45.601 CC lib/init/rpc.o 00:02:45.601 CC lib/virtio/virtio_vfio_user.o 00:02:45.601 CC lib/blob/blob_bs_dev.o 00:02:45.601 CC 
lib/virtio/virtio_pci.o 00:02:45.859 LIB libspdk_init.a 00:02:45.859 SO libspdk_init.so.5.0 00:02:45.859 LIB libspdk_virtio.a 00:02:45.859 LIB libspdk_vfu_tgt.a 00:02:45.859 SYMLINK libspdk_init.so 00:02:46.117 SO libspdk_vfu_tgt.so.3.0 00:02:46.117 SO libspdk_virtio.so.7.0 00:02:46.117 SYMLINK libspdk_vfu_tgt.so 00:02:46.117 SYMLINK libspdk_virtio.so 00:02:46.117 CC lib/event/app.o 00:02:46.117 CC lib/event/reactor.o 00:02:46.117 CC lib/event/log_rpc.o 00:02:46.117 CC lib/event/app_rpc.o 00:02:46.117 CC lib/event/scheduler_static.o 00:02:46.682 LIB libspdk_event.a 00:02:46.682 SO libspdk_event.so.14.0 00:02:46.682 LIB libspdk_accel.a 00:02:46.682 SYMLINK libspdk_event.so 00:02:46.682 SO libspdk_accel.so.15.1 00:02:46.682 LIB libspdk_nvme.a 00:02:46.682 SYMLINK libspdk_accel.so 00:02:46.940 SO libspdk_nvme.so.13.1 00:02:46.940 CC lib/bdev/bdev.o 00:02:46.940 CC lib/bdev/bdev_rpc.o 00:02:46.940 CC lib/bdev/bdev_zone.o 00:02:46.940 CC lib/bdev/part.o 00:02:46.940 CC lib/bdev/scsi_nvme.o 00:02:47.197 SYMLINK libspdk_nvme.so 00:02:48.571 LIB libspdk_blob.a 00:02:48.571 SO libspdk_blob.so.11.0 00:02:48.571 SYMLINK libspdk_blob.so 00:02:48.829 CC lib/blobfs/blobfs.o 00:02:48.829 CC lib/blobfs/tree.o 00:02:48.829 CC lib/lvol/lvol.o 00:02:49.395 LIB libspdk_bdev.a 00:02:49.395 SO libspdk_bdev.so.15.1 00:02:49.663 SYMLINK libspdk_bdev.so 00:02:49.663 LIB libspdk_blobfs.a 00:02:49.663 SO libspdk_blobfs.so.10.0 00:02:49.663 CC lib/scsi/dev.o 00:02:49.663 CC lib/scsi/lun.o 00:02:49.663 CC lib/scsi/port.o 00:02:49.663 CC lib/scsi/scsi.o 00:02:49.663 CC lib/nbd/nbd.o 00:02:49.663 CC lib/scsi/scsi_bdev.o 00:02:49.663 CC lib/ublk/ublk.o 00:02:49.663 CC lib/nvmf/ctrlr.o 00:02:49.663 CC lib/nbd/nbd_rpc.o 00:02:49.663 CC lib/scsi/scsi_pr.o 00:02:49.663 CC lib/ftl/ftl_core.o 00:02:49.663 CC lib/nvmf/ctrlr_discovery.o 00:02:49.663 CC lib/ublk/ublk_rpc.o 00:02:49.663 CC lib/scsi/scsi_rpc.o 00:02:49.663 CC lib/ftl/ftl_init.o 00:02:49.663 CC lib/nvmf/ctrlr_bdev.o 00:02:49.663 CC lib/scsi/task.o 00:02:49.663 CC lib/ftl/ftl_layout.o 00:02:49.663 CC lib/nvmf/subsystem.o 00:02:49.663 CC lib/ftl/ftl_debug.o 00:02:49.663 CC lib/ftl/ftl_io.o 00:02:49.663 CC lib/nvmf/nvmf.o 00:02:49.663 CC lib/nvmf/nvmf_rpc.o 00:02:49.663 CC lib/nvmf/transport.o 00:02:49.663 CC lib/ftl/ftl_sb.o 00:02:49.663 CC lib/nvmf/tcp.o 00:02:49.663 CC lib/ftl/ftl_l2p.o 00:02:49.663 CC lib/nvmf/stubs.o 00:02:49.663 CC lib/ftl/ftl_l2p_flat.o 00:02:49.663 CC lib/nvmf/mdns_server.o 00:02:49.663 CC lib/ftl/ftl_nv_cache.o 00:02:49.663 CC lib/nvmf/vfio_user.o 00:02:49.663 CC lib/nvmf/rdma.o 00:02:49.663 CC lib/ftl/ftl_band.o 00:02:49.663 CC lib/ftl/ftl_band_ops.o 00:02:49.663 CC lib/nvmf/auth.o 00:02:49.663 CC lib/ftl/ftl_writer.o 00:02:49.663 CC lib/ftl/ftl_rq.o 00:02:49.663 CC lib/ftl/ftl_reloc.o 00:02:49.663 CC lib/ftl/ftl_l2p_cache.o 00:02:49.663 CC lib/ftl/ftl_p2l.o 00:02:49.663 CC lib/ftl/mngt/ftl_mngt.o 00:02:49.663 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:49.663 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:49.663 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:49.663 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:49.663 SYMLINK libspdk_blobfs.so 00:02:49.663 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:49.663 LIB libspdk_lvol.a 00:02:49.923 SO libspdk_lvol.so.10.0 00:02:49.923 SYMLINK libspdk_lvol.so 00:02:49.923 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:50.182 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:50.182 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:50.182 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:50.182 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:50.182 CC lib/ftl/mngt/ftl_mngt_upgrade.o 
00:02:50.182 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:50.182 CC lib/ftl/utils/ftl_conf.o 00:02:50.182 CC lib/ftl/utils/ftl_md.o 00:02:50.182 CC lib/ftl/utils/ftl_mempool.o 00:02:50.182 CC lib/ftl/utils/ftl_bitmap.o 00:02:50.182 CC lib/ftl/utils/ftl_property.o 00:02:50.182 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:50.182 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:50.182 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:50.182 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:50.182 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:50.182 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:50.182 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:50.441 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:50.441 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:50.441 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:50.441 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:50.441 CC lib/ftl/base/ftl_base_dev.o 00:02:50.441 CC lib/ftl/base/ftl_base_bdev.o 00:02:50.441 CC lib/ftl/ftl_trace.o 00:02:50.441 LIB libspdk_nbd.a 00:02:50.441 SO libspdk_nbd.so.7.0 00:02:50.699 SYMLINK libspdk_nbd.so 00:02:50.699 LIB libspdk_scsi.a 00:02:50.699 SO libspdk_scsi.so.9.0 00:02:50.699 SYMLINK libspdk_scsi.so 00:02:50.699 LIB libspdk_ublk.a 00:02:50.958 SO libspdk_ublk.so.3.0 00:02:50.958 SYMLINK libspdk_ublk.so 00:02:50.958 CC lib/vhost/vhost.o 00:02:50.958 CC lib/iscsi/conn.o 00:02:50.958 CC lib/iscsi/init_grp.o 00:02:50.958 CC lib/vhost/vhost_rpc.o 00:02:50.958 CC lib/vhost/vhost_scsi.o 00:02:50.958 CC lib/iscsi/iscsi.o 00:02:50.958 CC lib/vhost/vhost_blk.o 00:02:50.958 CC lib/iscsi/md5.o 00:02:50.958 CC lib/vhost/rte_vhost_user.o 00:02:50.958 CC lib/iscsi/param.o 00:02:50.958 CC lib/iscsi/portal_grp.o 00:02:50.958 CC lib/iscsi/tgt_node.o 00:02:50.958 CC lib/iscsi/iscsi_subsystem.o 00:02:50.958 CC lib/iscsi/iscsi_rpc.o 00:02:50.958 CC lib/iscsi/task.o 00:02:51.218 LIB libspdk_ftl.a 00:02:51.475 SO libspdk_ftl.so.9.0 00:02:51.733 SYMLINK libspdk_ftl.so 00:02:52.297 LIB libspdk_vhost.a 00:02:52.297 SO libspdk_vhost.so.8.0 00:02:52.297 LIB libspdk_nvmf.a 00:02:52.297 SYMLINK libspdk_vhost.so 00:02:52.297 SO libspdk_nvmf.so.18.1 00:02:52.297 LIB libspdk_iscsi.a 00:02:52.554 SO libspdk_iscsi.so.8.0 00:02:52.554 SYMLINK libspdk_nvmf.so 00:02:52.554 SYMLINK libspdk_iscsi.so 00:02:52.829 CC module/env_dpdk/env_dpdk_rpc.o 00:02:52.829 CC module/vfu_device/vfu_virtio.o 00:02:52.829 CC module/vfu_device/vfu_virtio_blk.o 00:02:52.829 CC module/vfu_device/vfu_virtio_scsi.o 00:02:52.829 CC module/vfu_device/vfu_virtio_rpc.o 00:02:53.085 CC module/keyring/file/keyring.o 00:02:53.085 CC module/blob/bdev/blob_bdev.o 00:02:53.085 CC module/accel/dsa/accel_dsa.o 00:02:53.085 CC module/accel/iaa/accel_iaa.o 00:02:53.085 CC module/keyring/file/keyring_rpc.o 00:02:53.085 CC module/accel/dsa/accel_dsa_rpc.o 00:02:53.086 CC module/accel/iaa/accel_iaa_rpc.o 00:02:53.086 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:53.086 CC module/scheduler/gscheduler/gscheduler.o 00:02:53.086 CC module/accel/error/accel_error.o 00:02:53.086 CC module/accel/error/accel_error_rpc.o 00:02:53.086 CC module/accel/ioat/accel_ioat.o 00:02:53.086 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:53.086 CC module/accel/ioat/accel_ioat_rpc.o 00:02:53.086 CC module/keyring/linux/keyring.o 00:02:53.086 CC module/sock/posix/posix.o 00:02:53.086 CC module/keyring/linux/keyring_rpc.o 00:02:53.086 LIB libspdk_env_dpdk_rpc.a 00:02:53.086 SO libspdk_env_dpdk_rpc.so.6.0 00:02:53.086 SYMLINK libspdk_env_dpdk_rpc.so 00:02:53.086 LIB libspdk_keyring_linux.a 00:02:53.086 LIB libspdk_keyring_file.a 00:02:53.086 LIB 
libspdk_scheduler_gscheduler.a 00:02:53.086 LIB libspdk_scheduler_dpdk_governor.a 00:02:53.086 SO libspdk_keyring_linux.so.1.0 00:02:53.086 SO libspdk_keyring_file.so.1.0 00:02:53.086 LIB libspdk_accel_error.a 00:02:53.086 SO libspdk_scheduler_gscheduler.so.4.0 00:02:53.086 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:53.086 LIB libspdk_accel_ioat.a 00:02:53.086 LIB libspdk_scheduler_dynamic.a 00:02:53.086 LIB libspdk_accel_iaa.a 00:02:53.086 SO libspdk_accel_error.so.2.0 00:02:53.343 SO libspdk_accel_ioat.so.6.0 00:02:53.343 SYMLINK libspdk_keyring_file.so 00:02:53.343 SYMLINK libspdk_keyring_linux.so 00:02:53.343 SO libspdk_scheduler_dynamic.so.4.0 00:02:53.343 SYMLINK libspdk_scheduler_gscheduler.so 00:02:53.343 SO libspdk_accel_iaa.so.3.0 00:02:53.343 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:53.343 LIB libspdk_accel_dsa.a 00:02:53.343 SYMLINK libspdk_accel_error.so 00:02:53.343 LIB libspdk_blob_bdev.a 00:02:53.343 SYMLINK libspdk_scheduler_dynamic.so 00:02:53.343 SYMLINK libspdk_accel_ioat.so 00:02:53.343 SO libspdk_accel_dsa.so.5.0 00:02:53.343 SYMLINK libspdk_accel_iaa.so 00:02:53.343 SO libspdk_blob_bdev.so.11.0 00:02:53.343 SYMLINK libspdk_blob_bdev.so 00:02:53.343 SYMLINK libspdk_accel_dsa.so 00:02:53.600 LIB libspdk_vfu_device.a 00:02:53.600 SO libspdk_vfu_device.so.3.0 00:02:53.600 CC module/bdev/lvol/vbdev_lvol.o 00:02:53.600 CC module/bdev/split/vbdev_split.o 00:02:53.600 CC module/bdev/gpt/gpt.o 00:02:53.600 CC module/bdev/error/vbdev_error.o 00:02:53.600 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:53.600 CC module/bdev/nvme/bdev_nvme.o 00:02:53.600 CC module/bdev/split/vbdev_split_rpc.o 00:02:53.600 CC module/bdev/gpt/vbdev_gpt.o 00:02:53.600 CC module/bdev/error/vbdev_error_rpc.o 00:02:53.600 CC module/blobfs/bdev/blobfs_bdev.o 00:02:53.600 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:53.600 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:53.600 CC module/bdev/nvme/nvme_rpc.o 00:02:53.600 CC module/bdev/malloc/bdev_malloc.o 00:02:53.600 CC module/bdev/delay/vbdev_delay.o 00:02:53.600 CC module/bdev/null/bdev_null.o 00:02:53.600 CC module/bdev/aio/bdev_aio.o 00:02:53.600 CC module/bdev/aio/bdev_aio_rpc.o 00:02:53.600 CC module/bdev/nvme/bdev_mdns_client.o 00:02:53.600 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:53.600 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:53.600 CC module/bdev/null/bdev_null_rpc.o 00:02:53.600 CC module/bdev/nvme/vbdev_opal.o 00:02:53.600 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:53.600 CC module/bdev/raid/bdev_raid.o 00:02:53.600 CC module/bdev/raid/bdev_raid_rpc.o 00:02:53.600 CC module/bdev/raid/bdev_raid_sb.o 00:02:53.600 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:53.600 CC module/bdev/raid/raid0.o 00:02:53.600 CC module/bdev/raid/raid1.o 00:02:53.600 CC module/bdev/passthru/vbdev_passthru.o 00:02:53.600 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:53.600 CC module/bdev/ftl/bdev_ftl.o 00:02:53.600 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:53.600 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:53.600 CC module/bdev/iscsi/bdev_iscsi.o 00:02:53.600 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:53.600 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:53.600 CC module/bdev/raid/concat.o 00:02:53.600 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:53.600 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:53.600 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:53.600 SYMLINK libspdk_vfu_device.so 00:02:53.858 LIB libspdk_sock_posix.a 00:02:53.858 SO libspdk_sock_posix.so.6.0 00:02:53.858 LIB libspdk_bdev_gpt.a 00:02:53.858 SYMLINK 
libspdk_sock_posix.so 00:02:54.116 LIB libspdk_blobfs_bdev.a 00:02:54.116 SO libspdk_bdev_gpt.so.6.0 00:02:54.116 SO libspdk_blobfs_bdev.so.6.0 00:02:54.116 LIB libspdk_bdev_split.a 00:02:54.116 SYMLINK libspdk_bdev_gpt.so 00:02:54.116 SO libspdk_bdev_split.so.6.0 00:02:54.116 SYMLINK libspdk_blobfs_bdev.so 00:02:54.116 LIB libspdk_bdev_null.a 00:02:54.116 LIB libspdk_bdev_error.a 00:02:54.116 SO libspdk_bdev_error.so.6.0 00:02:54.116 SO libspdk_bdev_null.so.6.0 00:02:54.116 LIB libspdk_bdev_passthru.a 00:02:54.116 SYMLINK libspdk_bdev_split.so 00:02:54.116 LIB libspdk_bdev_zone_block.a 00:02:54.116 SO libspdk_bdev_passthru.so.6.0 00:02:54.116 LIB libspdk_bdev_ftl.a 00:02:54.116 SYMLINK libspdk_bdev_error.so 00:02:54.116 SYMLINK libspdk_bdev_null.so 00:02:54.116 SO libspdk_bdev_zone_block.so.6.0 00:02:54.116 LIB libspdk_bdev_malloc.a 00:02:54.116 LIB libspdk_bdev_delay.a 00:02:54.116 SO libspdk_bdev_ftl.so.6.0 00:02:54.116 LIB libspdk_bdev_iscsi.a 00:02:54.116 LIB libspdk_bdev_aio.a 00:02:54.116 SO libspdk_bdev_delay.so.6.0 00:02:54.116 SO libspdk_bdev_malloc.so.6.0 00:02:54.116 SYMLINK libspdk_bdev_passthru.so 00:02:54.116 SO libspdk_bdev_aio.so.6.0 00:02:54.116 SO libspdk_bdev_iscsi.so.6.0 00:02:54.116 SYMLINK libspdk_bdev_zone_block.so 00:02:54.373 SYMLINK libspdk_bdev_ftl.so 00:02:54.373 SYMLINK libspdk_bdev_delay.so 00:02:54.373 SYMLINK libspdk_bdev_malloc.so 00:02:54.373 SYMLINK libspdk_bdev_aio.so 00:02:54.373 SYMLINK libspdk_bdev_iscsi.so 00:02:54.373 LIB libspdk_bdev_virtio.a 00:02:54.373 SO libspdk_bdev_virtio.so.6.0 00:02:54.373 LIB libspdk_bdev_lvol.a 00:02:54.373 SO libspdk_bdev_lvol.so.6.0 00:02:54.373 SYMLINK libspdk_bdev_virtio.so 00:02:54.373 SYMLINK libspdk_bdev_lvol.so 00:02:54.632 LIB libspdk_bdev_raid.a 00:02:54.633 SO libspdk_bdev_raid.so.6.0 00:02:54.891 SYMLINK libspdk_bdev_raid.so 00:02:55.828 LIB libspdk_bdev_nvme.a 00:02:56.087 SO libspdk_bdev_nvme.so.7.0 00:02:56.087 SYMLINK libspdk_bdev_nvme.so 00:02:56.346 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:56.346 CC module/event/subsystems/keyring/keyring.o 00:02:56.346 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:56.346 CC module/event/subsystems/sock/sock.o 00:02:56.346 CC module/event/subsystems/iobuf/iobuf.o 00:02:56.346 CC module/event/subsystems/scheduler/scheduler.o 00:02:56.346 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:56.346 CC module/event/subsystems/vmd/vmd.o 00:02:56.346 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:56.604 LIB libspdk_event_keyring.a 00:02:56.604 LIB libspdk_event_vhost_blk.a 00:02:56.604 LIB libspdk_event_vmd.a 00:02:56.604 LIB libspdk_event_vfu_tgt.a 00:02:56.604 LIB libspdk_event_scheduler.a 00:02:56.604 LIB libspdk_event_sock.a 00:02:56.604 LIB libspdk_event_iobuf.a 00:02:56.604 SO libspdk_event_keyring.so.1.0 00:02:56.604 SO libspdk_event_vhost_blk.so.3.0 00:02:56.604 SO libspdk_event_sock.so.5.0 00:02:56.604 SO libspdk_event_vfu_tgt.so.3.0 00:02:56.604 SO libspdk_event_scheduler.so.4.0 00:02:56.604 SO libspdk_event_vmd.so.6.0 00:02:56.604 SO libspdk_event_iobuf.so.3.0 00:02:56.604 SYMLINK libspdk_event_keyring.so 00:02:56.604 SYMLINK libspdk_event_vhost_blk.so 00:02:56.604 SYMLINK libspdk_event_sock.so 00:02:56.604 SYMLINK libspdk_event_vfu_tgt.so 00:02:56.604 SYMLINK libspdk_event_scheduler.so 00:02:56.604 SYMLINK libspdk_event_vmd.so 00:02:56.604 SYMLINK libspdk_event_iobuf.so 00:02:56.862 CC module/event/subsystems/accel/accel.o 00:02:57.119 LIB libspdk_event_accel.a 00:02:57.119 SO libspdk_event_accel.so.6.0 00:02:57.119 SYMLINK libspdk_event_accel.so 
00:02:57.377 CC module/event/subsystems/bdev/bdev.o 00:02:57.377 LIB libspdk_event_bdev.a 00:02:57.377 SO libspdk_event_bdev.so.6.0 00:02:57.634 SYMLINK libspdk_event_bdev.so 00:02:57.634 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:57.634 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:57.634 CC module/event/subsystems/scsi/scsi.o 00:02:57.634 CC module/event/subsystems/ublk/ublk.o 00:02:57.634 CC module/event/subsystems/nbd/nbd.o 00:02:57.893 LIB libspdk_event_nbd.a 00:02:57.893 LIB libspdk_event_ublk.a 00:02:57.893 LIB libspdk_event_scsi.a 00:02:57.893 SO libspdk_event_nbd.so.6.0 00:02:57.893 SO libspdk_event_ublk.so.3.0 00:02:57.893 SO libspdk_event_scsi.so.6.0 00:02:57.893 SYMLINK libspdk_event_ublk.so 00:02:57.893 SYMLINK libspdk_event_nbd.so 00:02:57.893 SYMLINK libspdk_event_scsi.so 00:02:57.893 LIB libspdk_event_nvmf.a 00:02:57.893 SO libspdk_event_nvmf.so.6.0 00:02:58.150 SYMLINK libspdk_event_nvmf.so 00:02:58.150 CC module/event/subsystems/iscsi/iscsi.o 00:02:58.150 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:58.150 LIB libspdk_event_vhost_scsi.a 00:02:58.150 LIB libspdk_event_iscsi.a 00:02:58.150 SO libspdk_event_vhost_scsi.so.3.0 00:02:58.409 SO libspdk_event_iscsi.so.6.0 00:02:58.409 SYMLINK libspdk_event_vhost_scsi.so 00:02:58.409 SYMLINK libspdk_event_iscsi.so 00:02:58.409 SO libspdk.so.6.0 00:02:58.409 SYMLINK libspdk.so 00:02:58.677 CC app/trace_record/trace_record.o 00:02:58.677 CXX app/trace/trace.o 00:02:58.677 TEST_HEADER include/spdk/accel.h 00:02:58.677 CC app/spdk_nvme_perf/perf.o 00:02:58.677 TEST_HEADER include/spdk/accel_module.h 00:02:58.677 CC app/spdk_nvme_discover/discovery_aer.o 00:02:58.677 TEST_HEADER include/spdk/assert.h 00:02:58.677 TEST_HEADER include/spdk/barrier.h 00:02:58.677 TEST_HEADER include/spdk/base64.h 00:02:58.677 CC app/spdk_top/spdk_top.o 00:02:58.677 TEST_HEADER include/spdk/bdev.h 00:02:58.677 TEST_HEADER include/spdk/bdev_module.h 00:02:58.677 TEST_HEADER include/spdk/bdev_zone.h 00:02:58.677 TEST_HEADER include/spdk/bit_array.h 00:02:58.677 CC app/spdk_nvme_identify/identify.o 00:02:58.677 TEST_HEADER include/spdk/bit_pool.h 00:02:58.678 TEST_HEADER include/spdk/blob_bdev.h 00:02:58.678 CC test/rpc_client/rpc_client_test.o 00:02:58.678 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:58.678 TEST_HEADER include/spdk/blobfs.h 00:02:58.678 TEST_HEADER include/spdk/blob.h 00:02:58.678 TEST_HEADER include/spdk/conf.h 00:02:58.678 TEST_HEADER include/spdk/config.h 00:02:58.678 CC app/spdk_lspci/spdk_lspci.o 00:02:58.678 TEST_HEADER include/spdk/crc16.h 00:02:58.678 TEST_HEADER include/spdk/cpuset.h 00:02:58.678 TEST_HEADER include/spdk/crc32.h 00:02:58.678 TEST_HEADER include/spdk/crc64.h 00:02:58.678 TEST_HEADER include/spdk/dif.h 00:02:58.678 TEST_HEADER include/spdk/dma.h 00:02:58.678 TEST_HEADER include/spdk/endian.h 00:02:58.678 TEST_HEADER include/spdk/env_dpdk.h 00:02:58.678 TEST_HEADER include/spdk/env.h 00:02:58.678 TEST_HEADER include/spdk/event.h 00:02:58.678 TEST_HEADER include/spdk/fd_group.h 00:02:58.678 TEST_HEADER include/spdk/fd.h 00:02:58.678 TEST_HEADER include/spdk/file.h 00:02:58.678 TEST_HEADER include/spdk/ftl.h 00:02:58.678 TEST_HEADER include/spdk/gpt_spec.h 00:02:58.678 TEST_HEADER include/spdk/hexlify.h 00:02:58.678 TEST_HEADER include/spdk/histogram_data.h 00:02:58.678 TEST_HEADER include/spdk/idxd.h 00:02:58.678 TEST_HEADER include/spdk/idxd_spec.h 00:02:58.678 TEST_HEADER include/spdk/init.h 00:02:58.678 TEST_HEADER include/spdk/ioat.h 00:02:58.678 TEST_HEADER include/spdk/ioat_spec.h 00:02:58.678 
TEST_HEADER include/spdk/iscsi_spec.h 00:02:58.678 TEST_HEADER include/spdk/json.h 00:02:58.678 TEST_HEADER include/spdk/jsonrpc.h 00:02:58.678 TEST_HEADER include/spdk/keyring.h 00:02:58.678 TEST_HEADER include/spdk/keyring_module.h 00:02:58.678 TEST_HEADER include/spdk/likely.h 00:02:58.678 TEST_HEADER include/spdk/log.h 00:02:58.678 TEST_HEADER include/spdk/lvol.h 00:02:58.678 TEST_HEADER include/spdk/memory.h 00:02:58.678 TEST_HEADER include/spdk/mmio.h 00:02:58.678 TEST_HEADER include/spdk/nbd.h 00:02:58.678 TEST_HEADER include/spdk/nvme.h 00:02:58.678 TEST_HEADER include/spdk/notify.h 00:02:58.678 TEST_HEADER include/spdk/nvme_intel.h 00:02:58.678 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:58.678 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:58.678 TEST_HEADER include/spdk/nvme_zns.h 00:02:58.678 TEST_HEADER include/spdk/nvme_spec.h 00:02:58.678 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:58.678 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:58.678 TEST_HEADER include/spdk/nvmf.h 00:02:58.678 TEST_HEADER include/spdk/nvmf_spec.h 00:02:58.678 TEST_HEADER include/spdk/nvmf_transport.h 00:02:58.678 TEST_HEADER include/spdk/opal.h 00:02:58.678 TEST_HEADER include/spdk/opal_spec.h 00:02:58.678 TEST_HEADER include/spdk/pci_ids.h 00:02:58.678 TEST_HEADER include/spdk/pipe.h 00:02:58.678 TEST_HEADER include/spdk/queue.h 00:02:58.678 TEST_HEADER include/spdk/rpc.h 00:02:58.678 TEST_HEADER include/spdk/reduce.h 00:02:58.678 TEST_HEADER include/spdk/scheduler.h 00:02:58.678 TEST_HEADER include/spdk/scsi.h 00:02:58.678 TEST_HEADER include/spdk/scsi_spec.h 00:02:58.678 TEST_HEADER include/spdk/sock.h 00:02:58.678 TEST_HEADER include/spdk/stdinc.h 00:02:58.678 TEST_HEADER include/spdk/string.h 00:02:58.678 TEST_HEADER include/spdk/thread.h 00:02:58.678 TEST_HEADER include/spdk/trace.h 00:02:58.678 TEST_HEADER include/spdk/tree.h 00:02:58.678 TEST_HEADER include/spdk/trace_parser.h 00:02:58.678 TEST_HEADER include/spdk/ublk.h 00:02:58.678 TEST_HEADER include/spdk/util.h 00:02:58.678 TEST_HEADER include/spdk/uuid.h 00:02:58.678 TEST_HEADER include/spdk/version.h 00:02:58.678 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:58.678 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:58.678 TEST_HEADER include/spdk/vhost.h 00:02:58.678 TEST_HEADER include/spdk/xor.h 00:02:58.678 TEST_HEADER include/spdk/vmd.h 00:02:58.678 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:58.678 TEST_HEADER include/spdk/zipf.h 00:02:58.678 CXX test/cpp_headers/accel.o 00:02:58.678 CXX test/cpp_headers/accel_module.o 00:02:58.678 CXX test/cpp_headers/assert.o 00:02:58.678 CXX test/cpp_headers/barrier.o 00:02:58.678 CC app/spdk_dd/spdk_dd.o 00:02:58.678 CXX test/cpp_headers/base64.o 00:02:58.678 CXX test/cpp_headers/bdev.o 00:02:58.678 CXX test/cpp_headers/bdev_module.o 00:02:58.678 CXX test/cpp_headers/bdev_zone.o 00:02:58.678 CXX test/cpp_headers/bit_array.o 00:02:58.678 CXX test/cpp_headers/bit_pool.o 00:02:58.678 CXX test/cpp_headers/blob_bdev.o 00:02:58.678 CXX test/cpp_headers/blobfs_bdev.o 00:02:58.678 CXX test/cpp_headers/blobfs.o 00:02:58.678 CXX test/cpp_headers/blob.o 00:02:58.678 CXX test/cpp_headers/conf.o 00:02:58.678 CXX test/cpp_headers/config.o 00:02:58.678 CXX test/cpp_headers/cpuset.o 00:02:58.678 CXX test/cpp_headers/crc16.o 00:02:58.678 CC app/nvmf_tgt/nvmf_main.o 00:02:58.678 CC app/iscsi_tgt/iscsi_tgt.o 00:02:58.678 CXX test/cpp_headers/crc32.o 00:02:58.678 CC test/thread/poller_perf/poller_perf.o 00:02:58.678 CC examples/util/zipf/zipf.o 00:02:58.678 CC test/env/vtophys/vtophys.o 00:02:58.678 CC 
app/spdk_tgt/spdk_tgt.o 00:02:58.678 CC test/app/jsoncat/jsoncat.o 00:02:58.678 CC examples/ioat/verify/verify.o 00:02:58.678 CC examples/ioat/perf/perf.o 00:02:58.678 CC test/env/pci/pci_ut.o 00:02:58.678 CC test/app/histogram_perf/histogram_perf.o 00:02:58.678 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:58.678 CC test/app/stub/stub.o 00:02:58.678 CC test/env/memory/memory_ut.o 00:02:58.678 CC app/fio/nvme/fio_plugin.o 00:02:58.936 CC test/dma/test_dma/test_dma.o 00:02:58.936 CC app/fio/bdev/fio_plugin.o 00:02:58.936 CC test/app/bdev_svc/bdev_svc.o 00:02:58.936 CC test/env/mem_callbacks/mem_callbacks.o 00:02:58.936 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:58.936 LINK spdk_lspci 00:02:58.936 LINK spdk_nvme_discover 00:02:58.936 LINK rpc_client_test 00:02:59.198 LINK interrupt_tgt 00:02:59.198 LINK nvmf_tgt 00:02:59.198 LINK histogram_perf 00:02:59.198 CXX test/cpp_headers/crc64.o 00:02:59.198 LINK zipf 00:02:59.198 CXX test/cpp_headers/dif.o 00:02:59.198 LINK jsoncat 00:02:59.198 LINK vtophys 00:02:59.198 LINK poller_perf 00:02:59.198 LINK spdk_trace_record 00:02:59.198 LINK env_dpdk_post_init 00:02:59.198 CXX test/cpp_headers/dma.o 00:02:59.198 CXX test/cpp_headers/endian.o 00:02:59.198 CXX test/cpp_headers/env_dpdk.o 00:02:59.198 CXX test/cpp_headers/event.o 00:02:59.198 CXX test/cpp_headers/env.o 00:02:59.198 CXX test/cpp_headers/fd_group.o 00:02:59.198 CXX test/cpp_headers/fd.o 00:02:59.198 CXX test/cpp_headers/file.o 00:02:59.198 LINK stub 00:02:59.198 CXX test/cpp_headers/ftl.o 00:02:59.198 CXX test/cpp_headers/gpt_spec.o 00:02:59.198 CXX test/cpp_headers/hexlify.o 00:02:59.198 LINK iscsi_tgt 00:02:59.198 LINK ioat_perf 00:02:59.198 CXX test/cpp_headers/histogram_data.o 00:02:59.198 CXX test/cpp_headers/idxd.o 00:02:59.198 LINK verify 00:02:59.198 LINK spdk_tgt 00:02:59.198 CXX test/cpp_headers/idxd_spec.o 00:02:59.198 LINK bdev_svc 00:02:59.198 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:59.198 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:59.198 CXX test/cpp_headers/init.o 00:02:59.458 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:59.458 CXX test/cpp_headers/ioat.o 00:02:59.458 CXX test/cpp_headers/ioat_spec.o 00:02:59.458 CXX test/cpp_headers/iscsi_spec.o 00:02:59.458 CXX test/cpp_headers/json.o 00:02:59.458 CXX test/cpp_headers/jsonrpc.o 00:02:59.458 CXX test/cpp_headers/keyring.o 00:02:59.458 CXX test/cpp_headers/keyring_module.o 00:02:59.458 CXX test/cpp_headers/likely.o 00:02:59.458 LINK spdk_dd 00:02:59.458 CXX test/cpp_headers/log.o 00:02:59.458 CXX test/cpp_headers/lvol.o 00:02:59.458 CXX test/cpp_headers/memory.o 00:02:59.458 CXX test/cpp_headers/mmio.o 00:02:59.458 CXX test/cpp_headers/nbd.o 00:02:59.458 CXX test/cpp_headers/notify.o 00:02:59.458 LINK pci_ut 00:02:59.458 CXX test/cpp_headers/nvme.o 00:02:59.458 CXX test/cpp_headers/nvme_intel.o 00:02:59.458 CXX test/cpp_headers/nvme_ocssd.o 00:02:59.458 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:59.458 CXX test/cpp_headers/nvme_spec.o 00:02:59.458 CXX test/cpp_headers/nvme_zns.o 00:02:59.458 CXX test/cpp_headers/nvmf_cmd.o 00:02:59.719 LINK spdk_trace 00:02:59.719 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:59.719 LINK test_dma 00:02:59.719 CXX test/cpp_headers/nvmf.o 00:02:59.719 CXX test/cpp_headers/nvmf_spec.o 00:02:59.719 CXX test/cpp_headers/nvmf_transport.o 00:02:59.719 CXX test/cpp_headers/opal.o 00:02:59.719 CXX test/cpp_headers/opal_spec.o 00:02:59.719 LINK nvme_fuzz 00:02:59.719 CC test/event/event_perf/event_perf.o 00:02:59.719 CXX test/cpp_headers/pci_ids.o 00:02:59.719 CC 
test/event/reactor/reactor.o 00:02:59.719 LINK spdk_bdev 00:02:59.719 CXX test/cpp_headers/pipe.o 00:02:59.719 CC test/event/reactor_perf/reactor_perf.o 00:02:59.978 CXX test/cpp_headers/queue.o 00:02:59.978 CC examples/sock/hello_world/hello_sock.o 00:02:59.978 CC examples/thread/thread/thread_ex.o 00:02:59.978 CC examples/idxd/perf/perf.o 00:02:59.978 CC test/event/app_repeat/app_repeat.o 00:02:59.978 CC examples/vmd/lsvmd/lsvmd.o 00:02:59.978 CXX test/cpp_headers/reduce.o 00:02:59.978 CXX test/cpp_headers/rpc.o 00:02:59.978 LINK spdk_nvme 00:02:59.978 CXX test/cpp_headers/scheduler.o 00:02:59.978 CXX test/cpp_headers/scsi.o 00:02:59.978 CXX test/cpp_headers/scsi_spec.o 00:02:59.978 CXX test/cpp_headers/sock.o 00:02:59.978 CC examples/vmd/led/led.o 00:02:59.978 CXX test/cpp_headers/stdinc.o 00:02:59.978 CC test/event/scheduler/scheduler.o 00:02:59.978 CXX test/cpp_headers/string.o 00:02:59.978 CXX test/cpp_headers/thread.o 00:02:59.978 CXX test/cpp_headers/trace.o 00:02:59.978 CXX test/cpp_headers/trace_parser.o 00:02:59.978 CXX test/cpp_headers/tree.o 00:02:59.978 CXX test/cpp_headers/ublk.o 00:02:59.978 CXX test/cpp_headers/util.o 00:02:59.978 CXX test/cpp_headers/uuid.o 00:02:59.978 CXX test/cpp_headers/version.o 00:02:59.978 CXX test/cpp_headers/vfio_user_pci.o 00:02:59.978 CXX test/cpp_headers/vfio_user_spec.o 00:02:59.978 CXX test/cpp_headers/vhost.o 00:02:59.978 CXX test/cpp_headers/vmd.o 00:02:59.978 CXX test/cpp_headers/xor.o 00:02:59.978 CXX test/cpp_headers/zipf.o 00:03:00.241 LINK event_perf 00:03:00.241 LINK reactor 00:03:00.241 LINK reactor_perf 00:03:00.241 LINK vhost_fuzz 00:03:00.241 LINK mem_callbacks 00:03:00.241 LINK lsvmd 00:03:00.241 LINK spdk_nvme_perf 00:03:00.241 LINK app_repeat 00:03:00.241 CC app/vhost/vhost.o 00:03:00.241 LINK spdk_nvme_identify 00:03:00.241 LINK led 00:03:00.241 LINK thread 00:03:00.504 CC test/nvme/sgl/sgl.o 00:03:00.504 LINK hello_sock 00:03:00.504 CC test/nvme/reset/reset.o 00:03:00.504 CC test/nvme/overhead/overhead.o 00:03:00.504 CC test/nvme/startup/startup.o 00:03:00.504 CC test/nvme/err_injection/err_injection.o 00:03:00.504 CC test/nvme/aer/aer.o 00:03:00.504 CC test/nvme/e2edp/nvme_dp.o 00:03:00.504 LINK spdk_top 00:03:00.504 CC test/accel/dif/dif.o 00:03:00.504 CC test/nvme/reserve/reserve.o 00:03:00.504 CC test/blobfs/mkfs/mkfs.o 00:03:00.504 CC test/nvme/connect_stress/connect_stress.o 00:03:00.504 LINK scheduler 00:03:00.504 CC test/nvme/simple_copy/simple_copy.o 00:03:00.504 CC test/nvme/boot_partition/boot_partition.o 00:03:00.504 CC test/nvme/compliance/nvme_compliance.o 00:03:00.504 CC test/lvol/esnap/esnap.o 00:03:00.504 CC test/nvme/fused_ordering/fused_ordering.o 00:03:00.504 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:00.504 CC test/nvme/fdp/fdp.o 00:03:00.504 CC test/nvme/cuse/cuse.o 00:03:00.504 LINK idxd_perf 00:03:00.504 LINK vhost 00:03:00.763 LINK startup 00:03:00.763 LINK err_injection 00:03:00.763 LINK connect_stress 00:03:00.763 LINK boot_partition 00:03:00.763 LINK mkfs 00:03:00.763 LINK doorbell_aers 00:03:00.763 LINK reserve 00:03:00.763 LINK reset 00:03:00.763 LINK fused_ordering 00:03:00.763 CC examples/nvme/hotplug/hotplug.o 00:03:00.763 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:00.763 CC examples/nvme/abort/abort.o 00:03:00.763 CC examples/nvme/arbitration/arbitration.o 00:03:00.763 CC examples/nvme/reconnect/reconnect.o 00:03:00.763 CC examples/nvme/hello_world/hello_world.o 00:03:00.763 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:00.763 CC examples/nvme/pmr_persistence/pmr_persistence.o 
00:03:00.763 LINK aer 00:03:00.763 LINK nvme_compliance 00:03:00.763 LINK simple_copy 00:03:01.021 LINK nvme_dp 00:03:01.021 LINK sgl 00:03:01.021 LINK overhead 00:03:01.021 LINK dif 00:03:01.021 CC examples/accel/perf/accel_perf.o 00:03:01.021 LINK memory_ut 00:03:01.021 CC examples/blob/hello_world/hello_blob.o 00:03:01.021 CC examples/blob/cli/blobcli.o 00:03:01.021 LINK fdp 00:03:01.021 LINK pmr_persistence 00:03:01.021 LINK cmb_copy 00:03:01.021 LINK hotplug 00:03:01.279 LINK arbitration 00:03:01.279 LINK hello_world 00:03:01.279 LINK abort 00:03:01.279 LINK reconnect 00:03:01.279 LINK hello_blob 00:03:01.279 LINK nvme_manage 00:03:01.279 CC test/bdev/bdevio/bdevio.o 00:03:01.537 LINK accel_perf 00:03:01.537 LINK blobcli 00:03:01.794 LINK iscsi_fuzz 00:03:01.794 LINK bdevio 00:03:01.794 CC examples/bdev/hello_world/hello_bdev.o 00:03:01.794 CC examples/bdev/bdevperf/bdevperf.o 00:03:02.051 LINK cuse 00:03:02.051 LINK hello_bdev 00:03:02.615 LINK bdevperf 00:03:02.873 CC examples/nvmf/nvmf/nvmf.o 00:03:03.437 LINK nvmf 00:03:05.962 LINK esnap 00:03:05.962 00:03:05.962 real 0m41.363s 00:03:05.962 user 7m23.594s 00:03:05.962 sys 1m49.828s 00:03:05.962 15:14:36 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:05.962 15:14:36 make -- common/autotest_common.sh@10 -- $ set +x 00:03:05.962 ************************************ 00:03:05.962 END TEST make 00:03:05.962 ************************************ 00:03:05.962 15:14:36 -- common/autotest_common.sh@1142 -- $ return 0 00:03:05.962 15:14:36 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:05.962 15:14:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:05.962 15:14:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:05.962 15:14:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.962 15:14:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:05.962 15:14:36 -- pm/common@44 -- $ pid=868983 00:03:05.962 15:14:36 -- pm/common@50 -- $ kill -TERM 868983 00:03:05.962 15:14:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.962 15:14:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:05.962 15:14:36 -- pm/common@44 -- $ pid=868985 00:03:05.962 15:14:36 -- pm/common@50 -- $ kill -TERM 868985 00:03:05.962 15:14:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.962 15:14:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:05.962 15:14:36 -- pm/common@44 -- $ pid=868987 00:03:05.962 15:14:36 -- pm/common@50 -- $ kill -TERM 868987 00:03:05.962 15:14:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.962 15:14:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:05.962 15:14:36 -- pm/common@44 -- $ pid=869015 00:03:05.962 15:14:36 -- pm/common@50 -- $ sudo -E kill -TERM 869015 00:03:05.962 15:14:36 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:05.962 15:14:36 -- nvmf/common.sh@7 -- # uname -s 00:03:05.962 15:14:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:05.962 15:14:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:05.962 15:14:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:05.962 15:14:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:05.962 
15:14:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:05.962 15:14:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:05.962 15:14:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:05.962 15:14:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:05.962 15:14:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:05.962 15:14:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:05.962 15:14:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:05.962 15:14:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:05.962 15:14:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:05.962 15:14:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:05.962 15:14:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:05.962 15:14:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:05.962 15:14:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:05.963 15:14:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:05.963 15:14:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:05.963 15:14:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:05.963 15:14:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.963 15:14:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.963 15:14:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.963 15:14:36 -- paths/export.sh@5 -- # export PATH 00:03:05.963 15:14:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.963 15:14:36 -- nvmf/common.sh@47 -- # : 0 00:03:05.963 15:14:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:05.963 15:14:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:05.963 15:14:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:05.963 15:14:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:05.963 15:14:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:05.963 15:14:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:05.963 15:14:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:05.963 15:14:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:05.963 15:14:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:05.963 15:14:36 -- spdk/autotest.sh@32 -- # uname -s 00:03:05.963 15:14:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:05.963 15:14:36 -- 
spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:05.963 15:14:36 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:05.963 15:14:36 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:05.963 15:14:36 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:05.963 15:14:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:05.963 15:14:36 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:05.963 15:14:36 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:05.963 15:14:36 -- spdk/autotest.sh@48 -- # udevadm_pid=940196 00:03:05.963 15:14:36 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:05.963 15:14:36 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:05.963 15:14:36 -- pm/common@17 -- # local monitor 00:03:05.963 15:14:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.963 15:14:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.963 15:14:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.963 15:14:36 -- pm/common@21 -- # date +%s 00:03:05.963 15:14:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.963 15:14:36 -- pm/common@21 -- # date +%s 00:03:05.963 15:14:36 -- pm/common@25 -- # sleep 1 00:03:05.963 15:14:36 -- pm/common@21 -- # date +%s 00:03:05.963 15:14:36 -- pm/common@21 -- # date +%s 00:03:05.963 15:14:36 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720876476 00:03:05.963 15:14:36 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720876476 00:03:05.963 15:14:36 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720876476 00:03:05.963 15:14:36 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720876476 00:03:05.963 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720876476_collect-vmstat.pm.log 00:03:05.963 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720876476_collect-cpu-load.pm.log 00:03:05.963 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720876476_collect-cpu-temp.pm.log 00:03:05.963 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720876476_collect-bmc-pm.bmc.pm.log 00:03:06.928 15:14:37 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:06.928 15:14:37 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:06.928 15:14:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:06.928 15:14:37 -- common/autotest_common.sh@10 -- # set +x 00:03:06.928 15:14:37 -- spdk/autotest.sh@59 -- # create_test_list 00:03:06.928 15:14:37 -- common/autotest_common.sh@746 
-- # xtrace_disable 00:03:06.928 15:14:37 -- common/autotest_common.sh@10 -- # set +x 00:03:07.186 15:14:37 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:07.186 15:14:37 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:07.186 15:14:37 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:07.186 15:14:37 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:07.186 15:14:37 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:07.186 15:14:37 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:07.186 15:14:37 -- common/autotest_common.sh@1455 -- # uname 00:03:07.186 15:14:37 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:07.186 15:14:37 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:07.186 15:14:37 -- common/autotest_common.sh@1475 -- # uname 00:03:07.186 15:14:37 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:07.186 15:14:37 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:07.186 15:14:37 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:07.186 15:14:37 -- spdk/autotest.sh@72 -- # hash lcov 00:03:07.186 15:14:37 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:07.186 15:14:37 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:07.186 --rc lcov_branch_coverage=1 00:03:07.186 --rc lcov_function_coverage=1 00:03:07.186 --rc genhtml_branch_coverage=1 00:03:07.186 --rc genhtml_function_coverage=1 00:03:07.186 --rc genhtml_legend=1 00:03:07.186 --rc geninfo_all_blocks=1 00:03:07.186 ' 00:03:07.186 15:14:37 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:07.186 --rc lcov_branch_coverage=1 00:03:07.186 --rc lcov_function_coverage=1 00:03:07.186 --rc genhtml_branch_coverage=1 00:03:07.186 --rc genhtml_function_coverage=1 00:03:07.186 --rc genhtml_legend=1 00:03:07.186 --rc geninfo_all_blocks=1 00:03:07.186 ' 00:03:07.186 15:14:37 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:07.186 --rc lcov_branch_coverage=1 00:03:07.186 --rc lcov_function_coverage=1 00:03:07.186 --rc genhtml_branch_coverage=1 00:03:07.186 --rc genhtml_function_coverage=1 00:03:07.187 --rc genhtml_legend=1 00:03:07.187 --rc geninfo_all_blocks=1 00:03:07.187 --no-external' 00:03:07.187 15:14:37 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:07.187 --rc lcov_branch_coverage=1 00:03:07.187 --rc lcov_function_coverage=1 00:03:07.187 --rc genhtml_branch_coverage=1 00:03:07.187 --rc genhtml_function_coverage=1 00:03:07.187 --rc genhtml_legend=1 00:03:07.187 --rc geninfo_all_blocks=1 00:03:07.187 --no-external' 00:03:07.187 15:14:37 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:07.187 lcov: LCOV version 1.14 00:03:07.187 15:14:37 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:12.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:12.447 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:12.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:12.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:12.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:12.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:12.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:12.447 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions 
found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 
00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:12.448 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:12.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:12.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:12.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:12.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:12.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:12.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:12.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:12.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:12.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:12.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:12.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:12.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:12.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:12.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:12.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:12.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:12.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:12.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:12.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:12.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:12.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:12.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:12.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:12.449 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:12.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:12.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:12.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:12.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:12.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:12.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:12.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:12.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:12.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:12.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:12.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:12.706 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:12.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:12.707 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:12.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:12.707 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:12.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:12.707 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:12.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:12.707 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:12.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:12.707 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:12.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:12.707 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:12.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:12.707 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:12.707 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:12.707 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:12.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:12.707 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:12.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:12.707 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:12.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:12.707 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:12.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:12.707 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:39.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:39.238 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:45.794 15:15:16 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:45.794 15:15:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:45.794 15:15:16 -- common/autotest_common.sh@10 -- # set +x 00:03:45.794 15:15:16 -- spdk/autotest.sh@91 -- # rm -f 00:03:45.794 15:15:16 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:47.167 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:47.167 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:47.167 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:47.167 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:47.167 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:47.167 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:47.167 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:47.167 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:47.167 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:47.167 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:47.167 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:47.167 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:47.167 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:47.167 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:47.167 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:47.167 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:47.167 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:47.167 15:15:17 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:47.167 15:15:17 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:47.168 15:15:17 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:47.168 15:15:17 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:47.168 15:15:17 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:47.168 15:15:17 -- 
common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:47.168 15:15:17 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:47.168 15:15:17 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:47.168 15:15:17 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:47.168 15:15:17 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:47.168 15:15:17 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:47.168 15:15:17 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:47.168 15:15:17 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:47.168 15:15:17 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:47.168 15:15:17 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:47.168 No valid GPT data, bailing 00:03:47.426 15:15:17 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:47.426 15:15:17 -- scripts/common.sh@391 -- # pt= 00:03:47.426 15:15:17 -- scripts/common.sh@392 -- # return 1 00:03:47.426 15:15:17 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:47.426 1+0 records in 00:03:47.426 1+0 records out 00:03:47.426 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00388364 s, 270 MB/s 00:03:47.426 15:15:17 -- spdk/autotest.sh@118 -- # sync 00:03:47.426 15:15:17 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:47.426 15:15:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:47.426 15:15:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:49.328 15:15:19 -- spdk/autotest.sh@124 -- # uname -s 00:03:49.328 15:15:19 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:49.328 15:15:19 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:49.328 15:15:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.328 15:15:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.328 15:15:19 -- common/autotest_common.sh@10 -- # set +x 00:03:49.328 ************************************ 00:03:49.328 START TEST setup.sh 00:03:49.328 ************************************ 00:03:49.328 15:15:19 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:49.328 * Looking for test storage... 00:03:49.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:49.328 15:15:19 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:49.328 15:15:19 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:49.328 15:15:19 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:49.328 15:15:19 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.328 15:15:19 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.328 15:15:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:49.328 ************************************ 00:03:49.328 START TEST acl 00:03:49.328 ************************************ 00:03:49.328 15:15:19 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:49.328 * Looking for test storage... 
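[Editorial aside, not part of the captured log] The pre_cleanup trace above shows autotest.sh checking each NVMe namespace's zoned flag under /sys/block, probing for an existing partition table, and zeroing the first MiB of the device before the setup tests start. A minimal bash sketch of that pattern, simplified from what the trace shows (the extglob device matching and the spdk-gpt.py probe are omitted, and the loop variable is illustrative rather than the autotest helper's own):

    # Sketch only: skip zoned namespaces, then wipe any device that has no
    # recognizable partition table, mirroring the pre_cleanup steps above.
    for dev in /dev/nvme*n1; do
        name=$(basename "$dev")
        # zoned namespaces report something other than "none" here
        if [[ -e /sys/block/$name/queue/zoned ]] &&
           [[ $(cat /sys/block/$name/queue/zoned) != none ]]; then
            continue
        fi
        # blkid exits non-zero when no PTTYPE tag is found, so this wipes
        # only devices without a usable partition label
        if ! blkid -s PTTYPE -o value "$dev" >/dev/null 2>&1; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done
    sync

In the run above this is what produces the "No valid GPT data, bailing" probe output and the single 1 MiB dd copy before the setup.sh tests begin.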
00:03:49.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:49.328 15:15:19 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:49.328 15:15:19 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:49.328 15:15:19 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:49.328 15:15:19 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:49.328 15:15:19 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:49.328 15:15:19 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:49.328 15:15:19 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:49.328 15:15:19 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:49.328 15:15:19 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:49.328 15:15:19 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:49.328 15:15:19 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:49.328 15:15:19 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:49.328 15:15:19 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:49.328 15:15:19 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:49.328 15:15:19 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:49.328 15:15:19 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.738 15:15:21 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:50.738 15:15:21 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:50.738 15:15:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.738 15:15:21 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:50.738 15:15:21 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.738 15:15:21 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:51.671 Hugepages 00:03:51.671 node hugesize free / total 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.671 00:03:51.671 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.671 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.672 15:15:22 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:51.672 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.930 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.930 15:15:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.930 15:15:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:51.930 15:15:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:51.930 15:15:22 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:51.930 15:15:22 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:51.930 15:15:22 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:51.930 15:15:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.930 15:15:22 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:51.930 15:15:22 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:51.930 15:15:22 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.930 15:15:22 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.930 15:15:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:51.930 ************************************ 00:03:51.930 START TEST denied 00:03:51.930 ************************************ 00:03:51.930 15:15:22 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:51.930 15:15:22 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:51.930 15:15:22 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:51.930 15:15:22 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:51.930 15:15:22 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.930 15:15:22 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:53.301 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:53.301 15:15:23 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:53.301 15:15:23 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:53.301 15:15:23 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:53.301 15:15:23 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:53.301 15:15:23 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:53.301 15:15:23 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:53.301 15:15:23 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:53.301 15:15:23 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:53.301 15:15:23 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.301 15:15:23 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:55.832 00:03:55.832 real 0m3.753s 00:03:55.832 user 0m1.102s 00:03:55.832 sys 0m1.739s 00:03:55.832 15:15:26 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.832 15:15:26 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:55.832 ************************************ 00:03:55.832 END TEST denied 00:03:55.832 ************************************ 00:03:55.832 15:15:26 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:55.832 15:15:26 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:55.832 15:15:26 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.832 15:15:26 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.832 15:15:26 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:55.832 ************************************ 00:03:55.832 START TEST allowed 00:03:55.832 ************************************ 00:03:55.832 15:15:26 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:55.832 15:15:26 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:55.832 15:15:26 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:55.832 15:15:26 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:55.832 15:15:26 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.832 15:15:26 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:58.366 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:58.366 15:15:28 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:58.366 15:15:28 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:58.366 15:15:28 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:58.366 15:15:28 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:58.366 15:15:28 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.743 00:03:59.743 real 0m3.825s 00:03:59.743 user 0m0.993s 00:03:59.743 sys 0m1.678s 00:03:59.743 15:15:30 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.743 15:15:30 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:59.743 ************************************ 00:03:59.743 END TEST allowed 00:03:59.743 ************************************ 00:03:59.743 15:15:30 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:59.743 00:03:59.743 real 0m10.378s 00:03:59.743 user 0m3.174s 00:03:59.743 sys 0m5.200s 00:03:59.743 15:15:30 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.743 15:15:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:59.743 ************************************ 00:03:59.743 END TEST acl 00:03:59.743 ************************************ 00:03:59.743 15:15:30 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:59.743 15:15:30 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:59.743 15:15:30 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.743 15:15:30 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.743 15:15:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:59.743 ************************************ 00:03:59.743 START TEST hugepages 00:03:59.743 ************************************ 00:03:59.743 15:15:30 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:59.743 * Looking for test storage... 00:03:59.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42166348 kB' 'MemAvailable: 45673844 kB' 'Buffers: 2704 kB' 'Cached: 11794716 kB' 'SwapCached: 0 kB' 'Active: 8800020 kB' 'Inactive: 3506552 kB' 'Active(anon): 8405668 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512812 kB' 'Mapped: 208896 kB' 'Shmem: 7896516 kB' 'KReclaimable: 200428 kB' 'Slab: 576320 kB' 'SReclaimable: 200428 kB' 'SUnreclaim: 375892 kB' 'KernelStack: 12800 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 9526864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.744 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.745 
15:15:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:59.745 15:15:30 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:59.745 15:15:30 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.745 15:15:30 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.745 15:15:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.745 ************************************ 00:03:59.745 START TEST default_setup 00:03:59.745 ************************************ 00:03:59.745 15:15:30 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:59.745 15:15:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:59.745 15:15:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:59.745 15:15:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:59.745 15:15:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:59.745 15:15:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:59.745 15:15:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:59.745 15:15:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.746 15:15:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:59.746 15:15:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:59.746 15:15:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:59.746 15:15:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.746 15:15:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.746 15:15:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.746 15:15:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.746 15:15:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.746 15:15:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:59.746 15:15:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:59.746 15:15:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:59.746 15:15:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:59.746 15:15:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:59.746 15:15:30 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.746 15:15:30 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:00.682 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:00.682 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:00.940 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:00.940 
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:00.940 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:00.940 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:00.940 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:00.940 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:00.940 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:00.940 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:00.940 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:00.940 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:00.940 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:00.940 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:00.940 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:00.940 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:01.881 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44286976 kB' 'MemAvailable: 47794460 kB' 'Buffers: 2704 kB' 'Cached: 11794816 kB' 'SwapCached: 0 kB' 'Active: 8819036 kB' 'Inactive: 3506552 kB' 'Active(anon): 8424684 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531476 kB' 'Mapped: 209004 kB' 'Shmem: 7896616 kB' 'KReclaimable: 200404 kB' 'Slab: 576348 kB' 'SReclaimable: 200404 kB' 'SUnreclaim: 375944 kB' 
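By this point get_test_nr_hugepages has turned the test's request (size=2097152, in kB, i.e. 2 GiB) into 1024 pages of the 2048 kB default size and assigned all of them to the single user-supplied node 0, and scripts/setup.sh has rebound the ioatdma channels and the NVMe device to vfio-pci. The trace only shows the resulting nr_hugepages=1024; the equivalent arithmetic, using the values from this run, is a sketch along these lines:

# Values taken from the trace; variable names follow the traced script, the math is illustrative.
size_kb=2097152                                   # requested hugepage memory (2 GiB)
default_hugepages=2048                            # Hugepagesize from /proc/meminfo, in kB
nr_hugepages=$(( size_kb / default_hugepages ))   # 1024
nodes_test[0]=$nr_hugepages                       # node 0 gets the whole pool
echo "node0: ${nodes_test[0]} x ${default_hugepages} kB pages"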
'KernelStack: 12752 kB' 'PageTables: 7968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9547728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.881 
15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.881 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:01.882 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.883 15:15:32 
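The long run of [[ ... ]] / continue pairs above is verify_nr_hugepages resolving AnonHugePages: get_meminfo re-reads the whole /proc/meminfo snapshot and xtrace prints one comparison per key until the requested one matches, ending in anon=0. The identical scan now repeats for HugePages_Surp and then HugePages_Rsvd, so the remainder of this excerpt is the same pattern with a different target key. With the helper sketched earlier the three lookups collapse to a few lines (resv is a stand-in name, since the trace has not yet reached that assignment in this excerpt):

# Compact equivalent of the three meminfo lookups traced in this verification phase.
anon=$(get_meminfo AnonHugePages)     # 0 in this run: no transparent hugepages counted
surp=$(get_meminfo HugePages_Surp)    # 0 in this run: no surplus pages
resv=$(get_meminfo HugePages_Rsvd)    # reserved pages; stand-in name for the pending lookup
echo "anon=$anon surp=$surp resv=$resv"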
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44287228 kB' 'MemAvailable: 47794680 kB' 'Buffers: 2704 kB' 'Cached: 11794816 kB' 'SwapCached: 0 kB' 'Active: 8819028 kB' 'Inactive: 3506552 kB' 'Active(anon): 8424676 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531500 kB' 'Mapped: 209004 kB' 'Shmem: 7896616 kB' 'KReclaimable: 200340 kB' 'Slab: 576260 kB' 'SReclaimable: 200340 kB' 'SUnreclaim: 375920 kB' 'KernelStack: 12816 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9547744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.883 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.884 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44292124 kB' 'MemAvailable: 47799576 kB' 'Buffers: 2704 kB' 'Cached: 11794836 kB' 'SwapCached: 0 kB' 'Active: 8818864 kB' 'Inactive: 3506552 kB' 'Active(anon): 8424512 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531288 kB' 'Mapped: 208944 kB' 'Shmem: 7896636 kB' 'KReclaimable: 200340 kB' 'Slab: 576348 kB' 'SReclaimable: 200340 kB' 'SUnreclaim: 376008 kB' 'KernelStack: 12768 kB' 'PageTables: 7948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9547768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 
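Each lookup begins by dumping the full /proc/meminfo snapshot it is about to scan, which is why the same long dump recurs; the hugepage counters in it already reflect the pool default_setup asked for: HugePages_Total: 1024, HugePages_Free: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB. A one-line consistency check on those numbers:

# Counters copied from the dump above: 1024 pages x 2048 kB = 2097152 kB (2 GiB).
hp_total=1024 hp_size_kb=2048 hugetlb_kb=2097152
(( hp_total * hp_size_kb == hugetlb_kb )) && echo "hugepage pool sized as requested"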
-- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:01.885 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.886 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.147 
15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.147 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:02.148 nr_hugepages=1024 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.148 resv_hugepages=0 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.148 surplus_hugepages=0 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.148 anon_hugepages=0 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44291116 
kB' 'MemAvailable: 47798568 kB' 'Buffers: 2704 kB' 'Cached: 11794856 kB' 'SwapCached: 0 kB' 'Active: 8818856 kB' 'Inactive: 3506552 kB' 'Active(anon): 8424504 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531252 kB' 'Mapped: 208944 kB' 'Shmem: 7896656 kB' 'KReclaimable: 200340 kB' 'Slab: 576348 kB' 'SReclaimable: 200340 kB' 'SUnreclaim: 376008 kB' 'KernelStack: 12752 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9547788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.148 15:15:32 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.148 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.149 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
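The xtrace above shows the get_meminfo helper walking /proc/meminfo field by field, skipping every entry that is not the requested key (here HugePages_Total). A minimal sketch of that lookup pattern, reconstructed from the commands visible in the trace (mem_f, the mapfile, the "Node +([0-9])" prefix strip, the IFS=': ' read loop); this is an illustration, not the verbatim setup/common.sh:

shopt -s extglob                      # the "Node +([0-9]) " strip below is an extglob pattern
get_meminfo() {
    local get=$1 node=${2:-}          # key to look up, optional NUMA node
    local var val _ mem_f=/proc/meminfo
    local -a mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix each line with "Node <N> "
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
# get_meminfo HugePages_Total    -> 1024 (system-wide, as echoed just below)
# get_meminfo HugePages_Surp 0   -> 0    (node 0 only, via node0/meminfo)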
00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26651056 kB' 'MemUsed: 6178828 kB' 'SwapCached: 0 kB' 'Active: 3028524 kB' 'Inactive: 108448 kB' 'Active(anon): 2917636 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2839432 kB' 'Mapped: 49460 kB' 'AnonPages: 300780 kB' 'Shmem: 2620096 kB' 'KernelStack: 7288 kB' 'PageTables: 4796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95580 kB' 'Slab: 311572 kB' 'SReclaimable: 95580 kB' 'SUnreclaim: 215992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.150 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:02.151 node0=1024 expecting 1024 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:02.151 00:04:02.151 real 0m2.368s 00:04:02.151 user 0m0.658s 00:04:02.151 sys 0m0.825s 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.151 15:15:32 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:02.151 ************************************ 00:04:02.151 END TEST default_setup 00:04:02.151 ************************************ 00:04:02.151 15:15:32 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:02.151 15:15:32 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:02.151 15:15:32 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.151 15:15:32 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.151 15:15:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:02.151 ************************************ 00:04:02.151 START TEST per_node_1G_alloc 00:04:02.151 ************************************ 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:02.151 15:15:32 
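Before the per_node_1G_alloc test drives setup.sh, get_test_nr_hugepages converts the requested 1048576 kB (1 GiB) per node into a page count. The exact expression is not shown in this excerpt, but the numbers are consistent with a simple division by the 2048 kB Hugepagesize reported above; variable names in this sketch are illustrative:

size_kb=1048576            # per-node request passed to get_test_nr_hugepages, in kB
hugepagesize_kb=2048       # default hugepage size from /proc/meminfo (Hugepagesize: 2048 kB)
echo $(( size_kb / hugepagesize_kb ))   # -> 512, matching nr_hugepages=512 and NRHUGE=512 below

The entries that follow show exactly that: 512 pages are booked for each of nodes 0 and 1 (nodes_test[0]=512, nodes_test[1]=512) and exported as NRHUGE=512 HUGENODE=0,1 for scripts/setup.sh.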
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.151 15:15:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:03.529 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:03.529 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:03.529 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:03.529 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:03.529 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:03.529 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:03.529 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:03.529 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:03.529 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:03.529 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:03.529 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:03.529 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:03.529 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:03.529 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:03.529 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:03.529 
00:04:03.529 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:03.529 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:03.529 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44286536 kB' 'MemAvailable: 47793988 kB' 'Buffers: 2704 kB' 'Cached: 11794924 kB' 'SwapCached: 0 kB' 'Active: 8819596 kB' 'Inactive: 3506552 kB' 'Active(anon): 8425244 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531744 kB' 'Mapped: 208980 kB' 'Shmem: 7896724 kB' 'KReclaimable: 200340 kB' 'Slab: 576596 kB' 'SReclaimable: 200340 kB' 'SUnreclaim: 376256 kB' 'KernelStack: 12768 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9547836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB'
[... setup/common.sh@32 loop: every /proc/meminfo field from MemTotal through HardwareCorrupted is compared against AnonHugePages and skipped with "continue" ...]
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
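The get_meminfo call traced above follows a simple pattern: read all of /proc/meminfo into an array with mapfile, strip any leading "Node <n> " prefix so the per-node meminfo files parse the same way, then split each entry on ': ' and echo the value as soon as the requested key matches. A minimal standalone sketch of that logic (the function name and usage lines below are illustrative, not the verbatim setup/common.sh source):

  #!/usr/bin/env bash
  # Sketch of the meminfo lookup shown in the trace above; illustrative only.
  shopt -s extglob

  get_meminfo_field() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # A per-node query reads the node-local copy when it exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      # Drop the "Node <n> " prefix so per-node entries look like /proc/meminfo.
      mem=("${mem[@]#Node +([0-9]) }")
      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done
      return 1
  }

  get_meminfo_field HugePages_Total   # prints 1024 on this run
  get_meminfo_field AnonHugePages     # prints 0, matching anon=0 above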
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:03.531 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44287296 kB' 'MemAvailable: 47794748 kB' 'Buffers: 2704 kB' 'Cached: 11794924 kB' 'SwapCached: 0 kB' 'Active: 8819208 kB' 'Inactive: 3506552 kB' 'Active(anon): 8424856 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531348 kB' 'Mapped: 209000 kB' 'Shmem: 7896724 kB' 'KReclaimable: 200340 kB' 'Slab: 576576 kB' 'SReclaimable: 200340 kB' 'SUnreclaim: 376236 kB' 'KernelStack: 12768 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9547852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB'
[... setup/common.sh@32 loop: every /proc/meminfo field from MemTotal through HugePages_Rsvd is compared against HugePages_Surp and skipped with "continue" ...]
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
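For context on the pool being verified: NRHUGE=512 with HUGENODE=0,1 (set near the top of this test) requests 512 hugepages of the default 2048 kB size on each of NUMA nodes 0 and 1, which is why /proc/meminfo reports HugePages_Total: 1024 and Hugetlb: 2097152 kB. A rough sketch of how such a per-node request can be applied through the kernel's standard sysfs interface is below; this is an assumption-level illustration and not the actual SPDK scripts/setup.sh, which also handles driver binding, memory checks and retries (needs root):

  #!/usr/bin/env bash
  # Illustrative only: apply NRHUGE pages of the default 2048 kB size to each
  # node listed in HUGENODE, mirroring NRHUGE=512 HUGENODE=0,1 in this test.
  NRHUGE=${NRHUGE:-512}
  HUGENODE=${HUGENODE:-0,1}

  IFS=',' read -r -a nodes <<< "$HUGENODE"
  for node in "${nodes[@]}"; do
      sysfs=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
      echo "$NRHUGE" > "$sysfs"    # request the pages on this node
      echo "node$node: requested $NRHUGE, got $(<"$sysfs")"
  done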
setup/hugepages.sh@99 -- # surp=0 00:04:03.532 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.532 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.532 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.532 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.532 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.532 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.532 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.532 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.532 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.532 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.532 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.532 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44286696 kB' 'MemAvailable: 47794148 kB' 'Buffers: 2704 kB' 'Cached: 11794944 kB' 'SwapCached: 0 kB' 'Active: 8819284 kB' 'Inactive: 3506552 kB' 'Active(anon): 8424932 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531384 kB' 'Mapped: 208920 kB' 'Shmem: 7896744 kB' 'KReclaimable: 200340 kB' 'Slab: 576568 kB' 'SReclaimable: 200340 kB' 'SUnreclaim: 376228 kB' 'KernelStack: 12768 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9547876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.533 
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.533 15:15:34 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.533 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:03.534 nr_hugepages=1024 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.534 
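[editor's note] The trace up to this point is setup/common.sh's get_meminfo helper scanning every /proc/meminfo key until it reaches HugePages_Rsvd, which comes back 0; setup/hugepages.sh then echoes the totals (nr_hugepages=1024, resv/surplus/anon all 0). A minimal sketch of that lookup pattern is shown below -- function and variable names are illustrative, not the exact SPDK helper:

#!/usr/bin/env bash
# Sketch of the lookup loop the trace keeps repeating: pick /proc/meminfo or a
# per-node meminfo file, strip the optional "Node N " prefix, split each
# "Key: value" line on ': ', and return the value of the requested key (0 if absent).
shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # per-node counters live under sysfs when a node id is given
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # node files prefix every line with "Node N "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # skip MemTotal, MemFree, ... until the key matches
        echo "${val:-0}"
        return 0
    done
    echo 0                                # key not present in this file
}

# e.g. reserved hugepages system-wide, and free hugepages on NUMA node 0:
get_meminfo_sketch HugePages_Rsvd
get_meminfo_sketch HugePages_Free 0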
resv_hugepages=0 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.534 surplus_hugepages=0 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.534 anon_hugepages=0 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.534 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44286696 kB' 'MemAvailable: 47794148 kB' 'Buffers: 2704 kB' 'Cached: 11794968 kB' 'SwapCached: 0 kB' 'Active: 8819284 kB' 'Inactive: 3506552 kB' 'Active(anon): 8424932 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531344 kB' 'Mapped: 208920 kB' 'Shmem: 7896768 kB' 'KReclaimable: 200340 kB' 'Slab: 576568 kB' 'SReclaimable: 200340 kB' 'SUnreclaim: 376228 kB' 'KernelStack: 12752 kB' 'PageTables: 8196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9547900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.535 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.536 15:15:34 
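[editor's note] At this point the trace has confirmed HugePages_Total (1024) from /proc/meminfo and moved into get_nodes in setup/hugepages.sh, recording 512 expected pages for each of the two NUMA nodes it finds under sysfs. A rough sketch of that accounting step, with illustrative variable names rather than the exact script:

#!/usr/bin/env bash
# Sketch of the per-node accounting: check that the system-wide total (1024)
# equals nr_hugepages + surplus + reserved, then split the expectation evenly
# across the NUMA nodes found under sysfs (512 per node on this two-node box).
nr_hugepages=1024 surp=0 resv=0
(( 1024 == nr_hugepages + surp + resv )) || echo "hugepage totals do not add up" >&2

declare -A nodes_test
nodes=(/sys/devices/system/node/node[0-9]*)
no_nodes=${#nodes[@]}
for node in "${nodes[@]}"; do
    # the node path ends in the node id, e.g. .../node1 -> key 1
    nodes_test[${node##*node}]=$(( nr_hugepages / no_nodes ))
done
echo "expecting ${nodes_test[0]:-0} hugepages on each of $no_nodes node(s)"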
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27698428 kB' 'MemUsed: 5131456 kB' 'SwapCached: 0 kB' 'Active: 3029212 kB' 'Inactive: 108448 kB' 'Active(anon): 2918324 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2839540 kB' 'Mapped: 49472 kB' 'AnonPages: 301304 kB' 'Shmem: 2620204 kB' 'KernelStack: 7320 kB' 'PageTables: 5192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95580 kB' 'Slab: 311728 kB' 'SReclaimable: 95580 kB' 'SUnreclaim: 216148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.536 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 
15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.537 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16588268 kB' 'MemUsed: 11123556 kB' 'SwapCached: 0 kB' 'Active: 5790104 kB' 'Inactive: 3398104 kB' 'Active(anon): 5506640 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3398104 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8958152 kB' 'Mapped: 159448 kB' 'AnonPages: 230080 kB' 'Shmem: 5276584 kB' 'KernelStack: 5448 kB' 'PageTables: 3052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104760 kB' 'Slab: 264840 kB' 'SReclaimable: 104760 kB' 'SUnreclaim: 160080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
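[editor's note] The two printf blocks above are the node0 and node1 meminfo dumps being fed through the same read loop, this time looking for HugePages_Surp (0 on both nodes, with 512 pages resident on each). The per-node counters come from sysfs rather than /proc/meminfo; a compact way to pull the same counter, shown with awk purely as an illustration of what the trace is checking:

#!/usr/bin/env bash
# Sketch: read HugePages_Surp from each node's sysfs meminfo file. Every line
# in these files is prefixed with "Node N ", and the counter is the last field.
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    surp=$(awk '/HugePages_Surp:/ {print $NF}' "$node_dir/meminfo")
    echo "node $node: HugePages_Surp=${surp:-0}"
done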
00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.538 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 15:15:34 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 15:15:34 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:03.539 node0=512 expecting 512 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:03.539 node1=512 expecting 512 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:03.539 00:04:03.539 real 0m1.448s 00:04:03.539 user 0m0.614s 00:04:03.539 sys 0m0.797s 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.539 15:15:34 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:03.539 ************************************ 00:04:03.539 END TEST per_node_1G_alloc 00:04:03.539 ************************************ 00:04:03.539 15:15:34 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:03.539 15:15:34 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:03.539 15:15:34 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.539 15:15:34 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.539 15:15:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:03.539 ************************************ 00:04:03.539 START TEST even_2G_alloc 00:04:03.539 ************************************ 00:04:03.539 15:15:34 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:03.539 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:03.539 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:03.539 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:03.539 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.539 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:03.539 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:03.539 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:03.539 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.539 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:03.539 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:03.539 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.539 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.539 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:03.539 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:03.539 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.539 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:03.539 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:03.539 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:03.539 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.539 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:03.540 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:03.540 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:03.540 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.540 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:03.540 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:03.540 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:03.540 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.540 15:15:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:04.918 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:04.918 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 
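The trace above shows even_2G_alloc requesting 1024 x 2 MB hugepages (2 GB total), splitting them evenly across the two NUMA nodes (512 each, via nodes_test), and then handing off to scripts/setup.sh. A minimal standalone sketch of that even split, assuming the standard per-node sysfs hugepage interface; the paths and variable names here are illustrative and not taken from setup.sh itself:

    #!/usr/bin/env bash
    # Sketch: spread NRHUGE 2 MB hugepages evenly across the online NUMA nodes
    # (requires root; assumes the usual /sys/devices/system/node/nodeN layout).
    NRHUGE=${NRHUGE:-1024}
    nodes=(/sys/devices/system/node/node[0-9]*)
    per_node=$(( NRHUGE / ${#nodes[@]} ))
    for n in "${nodes[@]}"; do
        echo "$per_node" > "$n/hugepages/hugepages-2048kB/nr_hugepages"
        echo "$(basename "$n")=$per_node"
    done

With two nodes this prints node0=512 and node1=512, matching the "node0=512 expecting 512" / "node1=512 expecting 512" checks the test emits.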
00:04:04.918 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:04.918 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:04.918 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:04.918 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:04.918 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:04.918 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:04.918 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:04.918 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:04.918 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:04.918 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:04.918 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:04.918 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:04.918 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:04.918 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:04.918 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44278196 kB' 'MemAvailable: 47785648 kB' 'Buffers: 2704 kB' 'Cached: 11795056 kB' 'SwapCached: 0 kB' 'Active: 8825212 kB' 'Inactive: 3506552 kB' 'Active(anon): 8430860 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537224 kB' 'Mapped: 209840 kB' 'Shmem: 7896856 kB' 'KReclaimable: 200340 kB' 'Slab: 576252 kB' 'SReclaimable: 200340 kB' 'SUnreclaim: 375912 kB' 'KernelStack: 12736 kB' 'PageTables: 8192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9554392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196132 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.918 
15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.918 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.919 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@97 -- # anon=0 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44280992 kB' 'MemAvailable: 47788444 kB' 'Buffers: 2704 kB' 'Cached: 11795060 kB' 'SwapCached: 0 kB' 'Active: 8820356 kB' 'Inactive: 3506552 kB' 'Active(anon): 8426004 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532400 kB' 'Mapped: 209840 kB' 'Shmem: 7896860 kB' 'KReclaimable: 200340 kB' 'Slab: 576236 kB' 'SReclaimable: 200340 kB' 'SUnreclaim: 375896 kB' 'KernelStack: 12784 kB' 'PageTables: 8256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9550040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.920 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.921 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44273776 kB' 'MemAvailable: 47781228 kB' 'Buffers: 2704 kB' 'Cached: 11795076 kB' 'SwapCached: 0 kB' 'Active: 8824624 kB' 'Inactive: 3506552 kB' 'Active(anon): 8430272 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536660 kB' 'Mapped: 209372 kB' 'Shmem: 7896876 kB' 'KReclaimable: 200340 kB' 'Slab: 576296 kB' 'SReclaimable: 200340 kB' 'SUnreclaim: 375956 kB' 'KernelStack: 12784 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9554432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196036 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
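The scan traced here repeats one pattern per meminfo key: split each line on ': ', skip every key that is not the one requested, and echo the value once it matches. A minimal reconstruction of that get_meminfo helper, condensed from the commands visible in this trace (a simplified sketch, not the verbatim setup/common.sh), is:

    shopt -s extglob    # the +([0-9]) pattern below needs extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f mem

        # Default to the global file; with a node argument, prefer the
        # per-node meminfo. With an empty node the path
        # /sys/devices/system/node/node/meminfo does not exist, which is
        # why the trace here falls back to /proc/meminfo.
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of per-node files

        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"                    # e.g. 0 for HugePages_Rsvd on this run
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # get_meminfo HugePages_Rsvd     -> 0 (global /proc/meminfo)
    # get_meminfo HugePages_Free 0   -> per-node value from node0/meminfo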
00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.922 15:15:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.922 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:04.923 nr_hugepages=1024 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.923 resv_hugepages=0 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.923 surplus_hugepages=0 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:04.923 anon_hugepages=0 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
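The values echoed just above (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feed the consistency check (( 1024 == nr_hugepages + surp + resv )), and on this two-node machine the even_2G_alloc case then expects the 2 GB pool of 2048 kB pages to be split evenly, 512 pages per NUMA node, as the per-node passes further below confirm. A standalone sketch of that even-split check (illustrative only, reusing the get_meminfo sketch above rather than the verbatim setup/hugepages.sh):

    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run

    # The requested pool size must match what the kernel reports globally.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1

    # An even 2G allocation splits the pool across the online NUMA nodes:
    # 1024 pages / 2 nodes = 512 x 2048 kB pages per node on this machine.
    nodes=(/sys/devices/system/node/node[0-9]*)
    per_node=$(( nr_hugepages / ${#nodes[@]} ))
    for n in "${nodes[@]}"; do
        (( $(get_meminfo HugePages_Total "${n##*node}") == per_node )) || exit 1
    done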
00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.923 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44273776 kB' 'MemAvailable: 47781228 kB' 'Buffers: 2704 kB' 'Cached: 11795096 kB' 'SwapCached: 0 kB' 'Active: 8825088 kB' 'Inactive: 3506552 kB' 'Active(anon): 8430736 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537152 kB' 'Mapped: 209724 kB' 'Shmem: 7896896 kB' 'KReclaimable: 200340 kB' 'Slab: 576296 kB' 'SReclaimable: 200340 kB' 'SUnreclaim: 375956 kB' 'KernelStack: 12800 kB' 'PageTables: 8320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9554452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196036 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 
15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.924 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.925 
15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27711408 kB' 'MemUsed: 5118476 kB' 'SwapCached: 0 kB' 'Active: 3029296 kB' 'Inactive: 108448 kB' 'Active(anon): 2918408 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2839624 kB' 'Mapped: 49684 kB' 'AnonPages: 301336 kB' 'Shmem: 2620288 kB' 'KernelStack: 7304 kB' 'PageTables: 5148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 
95580 kB' 'Slab: 311672 kB' 'SReclaimable: 95580 kB' 'SUnreclaim: 216092 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.925 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 
15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:04.926 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16562804 kB' 'MemUsed: 11149020 kB' 'SwapCached: 0 kB' 'Active: 5790044 kB' 'Inactive: 3398104 kB' 'Active(anon): 5506580 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3398104 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8958176 kB' 'Mapped: 159452 kB' 'AnonPages: 230080 kB' 'Shmem: 5276608 kB' 'KernelStack: 5480 kB' 'PageTables: 3084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104760 kB' 'Slab: 264624 kB' 'SReclaimable: 104760 kB' 'SUnreclaim: 159864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.927 15:15:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.927 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:04.928 node0=512 expecting 512 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:04.928 node1=512 expecting 512 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:04.928 00:04:04.928 real 0m1.397s 00:04:04.928 user 0m0.537s 00:04:04.928 sys 0m0.821s 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.928 15:15:35 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:04.928 ************************************ 00:04:04.928 END TEST even_2G_alloc 00:04:04.928 ************************************ 00:04:04.928 15:15:35 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:04.928 15:15:35 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:04.928 15:15:35 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.928 15:15:35 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.928 15:15:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:05.218 ************************************ 00:04:05.218 START TEST odd_alloc 
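[editor's note: illustrative sketch] The even_2G_alloc trace above repeats one pattern many times: get_meminfo (setup/common.sh) picks /sys/devices/system/node/node<N>/meminfo when a node number is given (falling back to /proc/meminfo), strips the leading "Node <N> " prefix, then reads "key: value" pairs with IFS=': ' and skips every key until the requested field (here HugePages_Surp) is reached, echoing its value. A minimal standalone sketch of that lookup, reconstructed from the trace; the helper name meminfo_value and the simplified prefix stripping are illustrative assumptions, not the script's own code:

#!/usr/bin/env bash
# meminfo_value <field> [node] - print the value of <field> from the node's
# meminfo (or the system-wide /proc/meminfo when no node is given).
meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node meminfo lives under /sys and prefixes every line with "Node <N> ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#Node "$node" }          # drop the per-node prefix; no-op for /proc/meminfo
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # skip MemTotal, MemFree, ... exactly as the trace does
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Example mirroring the "get_meminfo HugePages_Surp 1" calls seen above:
meminfo_value HugePages_Surp 1

The surrounding hugepages.sh logic then adds the returned surplus/reserved counts into nodes_test[node] before comparing against the expected per-node value (512 per node for the even 2G allocation above).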
00:04:05.218 ************************************ 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.218 15:15:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:06.155 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:06.155 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:06.155 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:06.155 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:06.155 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:06.155 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:06.155 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:06.155 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 
00:04:06.155 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:06.155 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:06.155 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:06.155 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:06.155 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:06.155 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:06.155 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:06.155 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:06.155 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44264776 kB' 'MemAvailable: 47772224 kB' 'Buffers: 2704 kB' 'Cached: 11795196 kB' 'SwapCached: 0 kB' 'Active: 8817008 kB' 'Inactive: 3506552 kB' 'Active(anon): 8422656 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528860 kB' 'Mapped: 208128 kB' 'Shmem: 7896996 kB' 'KReclaimable: 200332 kB' 'Slab: 575984 kB' 'SReclaimable: 200332 kB' 'SUnreclaim: 375652 kB' 'KernelStack: 12704 kB' 'PageTables: 7844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9535460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 
'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.155 15:15:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.419 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.420 15:15:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.420 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.421 
15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.421 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44267688 kB' 'MemAvailable: 47775136 kB' 'Buffers: 2704 kB' 'Cached: 11795196 kB' 'SwapCached: 0 kB' 'Active: 8818124 kB' 'Inactive: 3506552 kB' 'Active(anon): 8423772 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530088 kB' 'Mapped: 208204 kB' 'Shmem: 7896996 kB' 'KReclaimable: 200332 kB' 'Slab: 575984 kB' 'SReclaimable: 200332 kB' 'SUnreclaim: 375652 kB' 'KernelStack: 12864 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9535476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:06.421 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.421 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.421 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.421 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.421 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.421 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.421 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.421 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.421 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.421 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.421 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.421 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.421 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.421 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.421 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.421 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.421 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.421 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.421 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.421 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.421 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
read -r var val _ 00:04:06.423 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.423 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.423 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.423 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.423 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.423 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.423 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:06.423 15:15:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:06.423 15:15:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:06.423 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.423 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:06.423 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:06.423 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.423 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.423 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.423 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.423 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.423 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.423 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.423 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.423 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44276440 kB' 'MemAvailable: 47783888 kB' 'Buffers: 2704 kB' 'Cached: 11795200 kB' 'SwapCached: 0 kB' 'Active: 8817716 kB' 'Inactive: 3506552 kB' 'Active(anon): 8423364 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529648 kB' 'Mapped: 208076 kB' 'Shmem: 7897000 kB' 'KReclaimable: 200332 kB' 'Slab: 575964 kB' 'SReclaimable: 200332 kB' 'SUnreclaim: 375632 kB' 'KernelStack: 13008 kB' 'PageTables: 9476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9536864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:06.423 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.423 15:15:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:06.425 nr_hugepages=1025 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.425 resv_hugepages=0 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.425 surplus_hugepages=0 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.425 anon_hugepages=0 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.425 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.426 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44277120 kB' 'MemAvailable: 47784568 kB' 'Buffers: 2704 kB' 'Cached: 11795236 kB' 'SwapCached: 0 kB' 'Active: 8817044 kB' 'Inactive: 3506552 kB' 'Active(anon): 8422692 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528404 kB' 'Mapped: 208076 kB' 'Shmem: 7897036 kB' 'KReclaimable: 200332 kB' 'Slab: 575900 kB' 'SReclaimable: 200332 kB' 'SUnreclaim: 375568 kB' 'KernelStack: 12976 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9536888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:06.426 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.426 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.426 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.426 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.426 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.426 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.426 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.426 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.426 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.426 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.426 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.426 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.426 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.426 15:15:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.426 15:15:36 
_ 00:04:06.427 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.427 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:06.427 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:06.427 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:06.427 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.427 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:06.427 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.427 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:06.427 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.427 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:06.427 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:06.427 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27713492 kB' 'MemUsed: 5116392 kB' 'SwapCached: 0 kB' 'Active: 3027180 kB' 'Inactive: 108448 kB' 'Active(anon): 2916292 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2839712 kB' 'Mapped: 48772 kB' 'AnonPages: 298996 kB' 'Shmem: 2620376 kB' 'KernelStack: 7496 kB' 'PageTables: 6256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95580 kB' 'Slab: 311436 kB' 'SReclaimable: 95580 kB' 'SUnreclaim: 215856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.428 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16572224 kB' 'MemUsed: 11139600 kB' 'SwapCached: 0 kB' 'Active: 5790464 kB' 'Inactive: 3398104 kB' 'Active(anon): 5507000 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3398104 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8958264 kB' 'Mapped: 159348 kB' 'AnonPages: 230476 kB' 'Shmem: 5276696 kB' 'KernelStack: 5496 kB' 'PageTables: 3072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104752 kB' 'Slab: 264448 kB' 'SReclaimable: 104752 kB' 'SUnreclaim: 159696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.429 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.430 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.431 15:15:37 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:06.431 node0=512 expecting 513 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:06.431 node1=513 expecting 512 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:06.431 00:04:06.431 real 0m1.371s 00:04:06.431 user 0m0.596s 00:04:06.431 sys 0m0.735s 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.431 15:15:37 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:06.431 ************************************ 00:04:06.431 END TEST odd_alloc 00:04:06.431 ************************************ 00:04:06.431 15:15:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:06.431 15:15:37 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:06.431 15:15:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.431 15:15:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.431 15:15:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:06.431 ************************************ 00:04:06.431 START TEST custom_alloc 00:04:06.431 ************************************ 00:04:06.431 15:15:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:06.431 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:06.431 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:06.431 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:06.431 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:06.431 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:06.431 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:06.431 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:06.431 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:06.431 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:06.432 15:15:37 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.432 15:15:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:07.814 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:07.814 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:07.814 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:07.814 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:07.814 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:07.814 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:07.814 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:07.814 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:07.814 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:07.814 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:07.814 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:07.814 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 
00:04:07.814 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:07.814 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:07.814 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:07.814 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:07.814 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:07.814 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:07.814 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:07.814 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:07.814 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:07.814 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:07.814 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:07.814 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:07.814 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:07.814 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.814 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:07.814 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.814 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:07.814 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.814 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.814 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.814 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43231308 kB' 'MemAvailable: 46738756 kB' 'Buffers: 2704 kB' 'Cached: 11795320 kB' 'SwapCached: 0 kB' 'Active: 8816784 kB' 'Inactive: 3506552 kB' 'Active(anon): 8422432 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528508 kB' 'Mapped: 208176 kB' 'Shmem: 7897120 kB' 'KReclaimable: 200332 kB' 'Slab: 576032 kB' 'SReclaimable: 200332 kB' 'SUnreclaim: 375700 kB' 'KernelStack: 12736 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9534876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 
kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43231712 kB' 'MemAvailable: 46739160 kB' 'Buffers: 2704 kB' 'Cached: 11795320 kB' 'SwapCached: 0 kB' 'Active: 8816860 kB' 'Inactive: 3506552 kB' 'Active(anon): 8422508 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528544 kB' 'Mapped: 208100 kB' 'Shmem: 7897120 kB' 'KReclaimable: 200332 kB' 'Slab: 576032 kB' 'SReclaimable: 200332 kB' 'SUnreclaim: 375700 kB' 'KernelStack: 12784 kB' 'PageTables: 8024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9534892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.816 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.817 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43231888 kB' 'MemAvailable: 46739336 kB' 'Buffers: 2704 kB' 'Cached: 11795320 kB' 'SwapCached: 0 kB' 'Active: 8816928 kB' 'Inactive: 3506552 kB' 'Active(anon): 8422576 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528692 kB' 
'Mapped: 208100 kB' 'Shmem: 7897120 kB' 'KReclaimable: 200332 kB' 'Slab: 576004 kB' 'SReclaimable: 200332 kB' 'SUnreclaim: 375672 kB' 'KernelStack: 12832 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9534916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 15:15:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
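[editor's note] The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_... ]] / continue" above are the xtrace of the get_meminfo helper in setup/common.sh walking /proc/meminfo (or a per-node meminfo file) one field at a time until it reaches the requested key, then echoing that key's value. The sketch below reconstructs that loop from the traced commands only; it is illustrative, not the exact helper, and the optional NUMA-node argument and extglob prefix strip are simplified assumptions. The values it yields here (anon=0, surp=0, resv=0, HugePages_Total=1536) feed the harness check that all 1536 requested hugepages are allocated with no surplus or reserved pages.

    #!/usr/bin/env bash
    # Illustrative sketch of the meminfo scan seen in this trace; the real
    # setup/common.sh helper may differ in option handling and defaults.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}      # field name, optional NUMA node
        local mem_f=/proc/meminfo
        local -a mem
        local var val _

        # When a node is given and per-node meminfo exists, read that instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix on per-node files

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip fields until the requested key
            echo "$val"                        # value only (kB amount or page count)
            return 0
        done
        return 1
    }

    # Usage matching the snapshot above (names per the trace, values per this run):
    get_meminfo AnonHugePages     # -> 0
    get_meminfo HugePages_Surp    # -> 0
    get_meminfo HugePages_Rsvd    # -> 0
    get_meminfo HugePages_Total   # -> 1536

The trace resumes below with the HugePages_Rsvd match and the subsequent HugePages_Total scan.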
00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:07.820 nr_hugepages=1536 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.820 resv_hugepages=0 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.820 surplus_hugepages=0 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.820 anon_hugepages=0 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43232044 kB' 'MemAvailable: 46739492 kB' 'Buffers: 2704 kB' 'Cached: 11795356 kB' 'SwapCached: 0 kB' 'Active: 8816920 kB' 'Inactive: 3506552 kB' 'Active(anon): 8422568 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528648 kB' 'Mapped: 208040 kB' 'Shmem: 7897156 kB' 'KReclaimable: 200332 kB' 'Slab: 576036 kB' 'SReclaimable: 200332 kB' 'SUnreclaim: 375704 kB' 'KernelStack: 12768 kB' 'PageTables: 7980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9534936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195904 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 15:15:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 
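The trace above finishes the meminfo walk at HugePages_Total, echoes 1536, passes the (( 1536 == nr_hugepages + surp + resv )) check, and then get_nodes records 512 huge pages for node0 and 1024 for node1 (no_nodes=2). As a hedged standalone sketch of that per-node census (this is not the harness's own helper; the 2048 kB page size and the sysfs hugepages path are assumptions inferred from the Hugepagesize value printed earlier in this log):

#!/usr/bin/env bash
# Sketch: count per-NUMA-node huge pages, mirroring what the traced get_nodes loop records.
declare -a nodes_sys
for node in /sys/devices/system/node/node[0-9]*; do
  id=${node##*node}                                                      # numeric node id, e.g. 0 or 1
  nodes_sys[$id]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")  # assumed 2 MB page size
done
echo "no_nodes=${#nodes_sys[@]}"                                         # the trace reports no_nodes=2
for id in "${!nodes_sys[@]}"; do echo "node${id}=${nodes_sys[$id]}"; done

On this box the sketch would print node0=512 and node1=1024, matching the nodes_sys assignments shown in the trace.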
00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:07.821 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27709400 kB' 'MemUsed: 5120484 kB' 'SwapCached: 0 kB' 'Active: 3026580 kB' 'Inactive: 108448 kB' 'Active(anon): 2915692 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2839764 kB' 'Mapped: 48788 kB' 'AnonPages: 298404 kB' 'Shmem: 2620428 kB' 'KernelStack: 7224 kB' 'PageTables: 4812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95580 kB' 'Slab: 311476 kB' 'SReclaimable: 95580 kB' 'SUnreclaim: 215896 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 15:15:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.823 15:15:38 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 15522644 kB' 'MemUsed: 12189180 kB' 'SwapCached: 0 kB' 'Active: 5790072 kB' 'Inactive: 3398104 kB' 'Active(anon): 5506608 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3398104 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8958300 kB' 'Mapped: 159252 kB' 'AnonPages: 229956 kB' 'Shmem: 5276732 kB' 'KernelStack: 5544 kB' 'PageTables: 3124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104752 kB' 'Slab: 264560 kB' 'SReclaimable: 104752 kB' 'SUnreclaim: 159808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 15:15:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:07.824 node0=512 expecting 512 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:07.824 node1=1024 expecting 1024 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:07.824 00:04:07.824 real 0m1.404s 00:04:07.824 user 0m0.594s 00:04:07.824 sys 0m0.770s 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.824 15:15:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:07.824 ************************************ 00:04:07.824 END TEST custom_alloc 00:04:07.824 ************************************ 00:04:07.824 15:15:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:07.824 15:15:38 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:07.824 15:15:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.824 15:15:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.824 15:15:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:07.824 ************************************ 00:04:07.824 START TEST no_shrink_alloc 00:04:07.824 ************************************ 00:04:07.824 15:15:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:07.824 15:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:07.824 15:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:07.824 15:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:07.824 15:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:07.824 15:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:07.824 15:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:07.824 15:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:07.824 15:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:07.824 15:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:07.824 15:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:07.824 15:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:07.824 15:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:07.825 15:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:07.825 15:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:07.825 15:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g 
nodes_test 00:04:07.825 15:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:07.825 15:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:07.825 15:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:07.825 15:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:07.825 15:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:07.825 15:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.825 15:15:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:09.206 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:09.206 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:09.206 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:09.206 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:09.206 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:09.206 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:09.206 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:09.206 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:09.206 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:09.206 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:09.206 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:09.206 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:09.206 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:09.206 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:09.206 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:09.206 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:09.206 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.206 15:15:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44243708 kB' 'MemAvailable: 47751156 kB' 'Buffers: 2704 kB' 'Cached: 11795448 kB' 'SwapCached: 0 kB' 'Active: 8817180 kB' 'Inactive: 3506552 kB' 'Active(anon): 8422828 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528856 kB' 'Mapped: 208184 kB' 'Shmem: 7897248 kB' 'KReclaimable: 200332 kB' 'Slab: 576008 kB' 'SReclaimable: 200332 kB' 'SUnreclaim: 375676 kB' 'KernelStack: 12784 kB' 'PageTables: 7952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9535204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
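At this point no_shrink_alloc has requested 1024 huge pages on node 0 (2097152 / 2048, per the Hugepagesize reported above), and verify_nr_hugepages is calling get_meminfo AnonHugePages with no node argument, so mem_f stays /proc/meminfo (the /sys/devices/system/node/node/meminfo existence test fails) and the Node-prefix strip is a no-op. A minimal sketch of a get_meminfo-style lookup along those lines (not the harness's helper itself; the function name and structure are illustrative):

get_meminfo_sketch() {
  local get=$1 node=$2 line var val
  local mem_f=/proc/meminfo
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo   # per-node rows look like "Node 0 MemTotal: ..."
  fi
  while IFS= read -r line; do
    line=${line#Node $node }                           # drop the "Node <N>" column on per-node files
    IFS=': ' read -r var val _ <<< "$line"
    if [[ $var == "$get" ]]; then
      echo "$val"                                      # value in kB, or a bare count for HugePages_* rows
      return 0
    fi
  done < "$mem_f"
  return 1
}

For example, get_meminfo_sketch HugePages_Surp 0 would print 0 on this system, matching the echo 0 / return 0 seen in the node-0 portion of the trace above.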
00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.206 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
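The repeated "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" pairs above and below are bash xtrace output from the get_meminfo helper in setup/common.sh: it captures a snapshot of /proc/meminfo (the long printf '%s\n' 'MemTotal: ...' line) and then walks it one "Key: value" pair at a time until it reaches the requested field (here AnonHugePages), echoes that value, and returns. A minimal sketch of that parsing loop, using a hypothetical stand-alone helper named meminfo_value and ignoring the per-node /sys/devices/system/node/node<N>/meminfo branch the real script also supports (node= is empty in this trace):
meminfo_value() {
    # Scan /proc/meminfo field by field; IFS=': ' splits "Key: value kB" into
    # key, numeric value, and unit, mirroring the read -r var val _ pattern
    # visible in the trace above.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the trace
        echo "$val"                        # value in kB (or a bare count for HugePages_*)
        return 0
    done < /proc/meminfo
    return 1
}
# Example: anon=$(meminfo_value AnonHugePages)   # this run reports 0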
00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.207 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44246016 kB' 'MemAvailable: 47753464 kB' 'Buffers: 2704 kB' 'Cached: 11795452 kB' 'SwapCached: 0 kB' 'Active: 8817748 kB' 'Inactive: 3506552 kB' 'Active(anon): 8423396 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529424 kB' 'Mapped: 208300 kB' 'Shmem: 7897252 kB' 'KReclaimable: 200332 kB' 'Slab: 576024 kB' 'SReclaimable: 200332 kB' 'SUnreclaim: 375692 kB' 'KernelStack: 12816 kB' 'PageTables: 8040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9535220 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 
15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.208 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.209 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44246664 kB' 'MemAvailable: 47754112 kB' 'Buffers: 2704 kB' 'Cached: 11795472 kB' 'SwapCached: 0 kB' 'Active: 8816872 kB' 'Inactive: 3506552 kB' 'Active(anon): 8422520 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528496 kB' 'Mapped: 208112 kB' 'Shmem: 7897272 kB' 'KReclaimable: 200332 kB' 'Slab: 575992 kB' 'SReclaimable: 200332 kB' 'SUnreclaim: 375660 kB' 'KernelStack: 12784 kB' 'PageTables: 7920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9535244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 
15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:09.210 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
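Below, the HugePages_Rsvd scan reaches its target and returns 0; with anon, surp, and resv all known, verify_nr_hugepages in setup/hugepages.sh compares them against the requested page count (1024 in this run). A hedged paraphrase of the arithmetic that appears at hugepages.sh@107 and @109 in the trace, plugging in the values reported here:
# nr_hugepages, surp, and resv are the names used by the trace; 1024 and 0 are
# the values observed in this run.
nr_hugepages=1024; surp=0; resv=0
(( 1024 == nr_hugepages + surp + resv )) && (( 1024 == nr_hugepages )) \
    && echo "hugepage count verified" || echo "hugepage count mismatch"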
00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:09.211 nr_hugepages=1024 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:09.211 resv_hugepages=0 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:09.211 surplus_hugepages=0 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:09.211 anon_hugepages=0 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.211 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44246664 kB' 'MemAvailable: 47754112 kB' 'Buffers: 2704 kB' 'Cached: 11795472 kB' 'SwapCached: 0 kB' 'Active: 8816580 kB' 'Inactive: 3506552 kB' 'Active(anon): 8422228 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528212 kB' 'Mapped: 208112 kB' 'Shmem: 7897272 kB' 'KReclaimable: 200332 kB' 'Slab: 575992 kB' 'SReclaimable: 200332 kB' 'SUnreclaim: 375660 kB' 'KernelStack: 12784 kB' 'PageTables: 7920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9535264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.212 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
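The values echoed earlier in this trace (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0) feed the consistency check at hugepages.sh@107/@110: the kernel's HugePages_Total must equal the requested count plus surplus plus reserved pages. A self-contained sketch of that arithmetic, reading the counters with awk instead of the script's get_meminfo helper:

expected=1024                                            # NRHUGE for this run
read -r total surp resv < <(awk '
    /^HugePages_Total:/ {t=$2}
    /^HugePages_Surp:/  {s=$2}
    /^HugePages_Rsvd:/  {r=$2}
    END {print t, s, r}' /proc/meminfo)
if (( total == expected + surp + resv )); then
    echo "nr_hugepages=$expected resv_hugepages=$resv surplus_hugepages=$surp"
else
    echo "hugepage accounting mismatch: total=$total expected=$expected surp=$surp resv=$resv" >&2
fi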
00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26649416 kB' 'MemUsed: 6180468 kB' 'SwapCached: 0 kB' 'Active: 3027420 kB' 'Inactive: 108448 kB' 'Active(anon): 2916532 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2839872 kB' 'Mapped: 48800 kB' 'AnonPages: 299192 kB' 'Shmem: 2620536 kB' 'KernelStack: 7256 kB' 'PageTables: 4952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95580 kB' 'Slab: 311388 kB' 'SReclaimable: 95580 kB' 'SUnreclaim: 215808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.213 15:15:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.213 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
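The get_nodes / per-node section of this trace enumerates /sys/devices/system/node/node<N> with an extglob (node+([0-9])) and records a hugepage count per NUMA node (nodes_sys[0]=1024, nodes_sys[1]=0, no_nodes=2), then re-runs get_meminfo against node0's meminfo file. Where the per-node figure comes from is not visible in the trace, so this standalone sketch reads the per-node nr_hugepages file directly; the 2048kB directory matches the Hugepagesize reported earlier in this log:

declare -a nodes_sys=()
for node in /sys/devices/system/node/node[0-9]*; do
    n=${node##*node}                                             # "/sys/.../node0" -> "0"
    nodes_sys[$n]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "no_nodes=${#nodes_sys[@]}"                                 # 2 on this machine
for n in "${!nodes_sys[@]}"; do
    echo "node$n=${nodes_sys[$n]}"                               # node0=1024, node1=0 in this run
done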
00:04:09.214 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.215 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.215 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.215 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.215 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.215 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.215 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.215 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.215 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.215 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.215 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.215 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.215 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:09.215 node0=1024 expecting 1024 00:04:09.215 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:09.215 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:09.215 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:09.215 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:09.215 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.215 15:15:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:10.598 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:10.598 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:10.598 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:10.598 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:10.598 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:10.598 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:10.598 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:10.598 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:10.598 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:10.598 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:10.598 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:10.598 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:10.598 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:10.598 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:10.598 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:10.598 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:10.598 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:10.598 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:10.598 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # 
verify_nr_hugepages 00:04:10.598 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:10.598 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:10.598 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:10.598 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:10.598 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:10.598 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:10.598 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.598 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:10.598 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.598 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:10.598 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:10.598 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.598 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.598 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.598 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.598 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.598 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.598 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.598 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.598 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44211124 kB' 'MemAvailable: 47718548 kB' 'Buffers: 2704 kB' 'Cached: 11795568 kB' 'SwapCached: 0 kB' 'Active: 8821780 kB' 'Inactive: 3506552 kB' 'Active(anon): 8427428 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533316 kB' 'Mapped: 208628 kB' 'Shmem: 7897368 kB' 'KReclaimable: 200284 kB' 'Slab: 575712 kB' 'SReclaimable: 200284 kB' 'SUnreclaim: 375428 kB' 'KernelStack: 12768 kB' 'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9540364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:10.598 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.598 15:15:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
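[editor's note] The long run of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" entries above is the xtrace of setup/common.sh's get_meminfo helper walking the captured /proc/meminfo snapshot one field at a time until the requested key (here AnonHugePages) matches, then echoing its value. A minimal sketch of that lookup pattern, reconstructed from the trace rather than copied verbatim from the SPDK source, looks like this:

  #!/usr/bin/env bash
  # get_meminfo_value KEY -- print the value column for KEY from /proc/meminfo.
  # Reconstructed from the trace above; illustrative, not the exact SPDK helper.
  get_meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # Skip every field until the requested one is found; the repeated
          # "continue" entries in the trace are exactly this branch.
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  get_meminfo_value AnonHugePages   # prints 0 on this test node, per the snapshot above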
00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.599 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44208632 kB' 'MemAvailable: 47716056 kB' 'Buffers: 2704 kB' 'Cached: 11795572 kB' 'SwapCached: 0 kB' 'Active: 8823360 kB' 'Inactive: 3506552 kB' 'Active(anon): 8429008 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534948 kB' 'Mapped: 209108 kB' 'Shmem: 7897372 kB' 'KReclaimable: 200284 kB' 'Slab: 575712 kB' 'SReclaimable: 200284 kB' 'SUnreclaim: 375428 kB' 'KernelStack: 12800 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9541716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196068 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
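[editor's note] Just above, get_meminfo runs without a NUMA node argument (local node=), so the test for /sys/devices/system/node/node/meminfo fails and the helper falls back to /proc/meminfo; when a node is passed, the per-node file is read instead and the leading "Node <N> " prefix is stripped from every line (the "${mem[@]#Node +([0-9]) }" expansion visible in the trace). A hedged sketch of that source selection, with illustrative names:

  #!/usr/bin/env bash
  # Pick the meminfo source for an optional NUMA node; mirrors the mem_f /
  # "Node +([0-9]) " handling seen in the trace (names are illustrative).
  shopt -s extglob                         # required for the +([0-9]) pattern
  select_meminfo_source() {
      local node=$1 mem_f=/proc/meminfo mem
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")     # drop the "Node 0 " prefix on per-node files
      printf '%s\n' "${mem[@]}"
  }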
00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.600 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 
15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44208132 kB' 'MemAvailable: 47715556 kB' 'Buffers: 2704 kB' 'Cached: 11795588 kB' 'SwapCached: 0 kB' 'Active: 8818808 kB' 'Inactive: 3506552 kB' 'Active(anon): 8424456 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530336 kB' 'Mapped: 209032 kB' 'Shmem: 7897388 kB' 'KReclaimable: 200284 kB' 'Slab: 575708 kB' 'SReclaimable: 200284 kB' 'SUnreclaim: 375424 kB' 'KernelStack: 12768 kB' 'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9538160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.602 15:15:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
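[editor's note] The printf '%s\n' 'MemTotal: ... kB' entries in this trace are the captured meminfo snapshot that the loop then scans. Its hugepage counters are internally consistent: HugePages_Total (1024) times Hugepagesize (2048 kB) equals the Hugetlb figure (2097152 kB, i.e. 2 GiB). A small standalone cross-check of that relation, assuming a host that only uses the default 2 MiB pages and a kernel new enough to report Hugetlb (not part of the test scripts):

  #!/usr/bin/env bash
  # Cross-check HugePages_Total * Hugepagesize == Hugetlb on the current host.
  total=0 size=0 hugetlb=0
  while IFS=': ' read -r key val _; do
      case $key in
          HugePages_Total) total=$val ;;
          Hugepagesize)    size=$val ;;    # kB
          Hugetlb)         hugetlb=$val ;; # kB
      esac
  done < /proc/meminfo
  echo "HugePages_Total=$total Hugepagesize=${size}kB product=$((total * size))kB Hugetlb=${hugetlb}kB"
  (( total * size == hugetlb )) && echo consistent || echo mismatch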
00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
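[editor's note] This pass is the HugePages_Rsvd lookup; together with the AnonHugePages and HugePages_Surp results already returned above (anon=0, surp=0) it feeds the consistency check that appears just below in the trace, where hugepages.sh echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 and asserts that the requested 1024 pages equal nr_hugepages plus the surplus and reserved counts. A compact sketch of that bookkeeping, reusing the get_meminfo_value sketch from earlier (reconstructed from the trace; the real script's flow differs slightly):

  # Hedged reconstruction of the no_shrink_alloc bookkeeping seen in the trace.
  requested=1024                                    # pages asked for by the test
  anon=$(get_meminfo_value AnonHugePages)           # 0 (kB) on this node
  surp=$(get_meminfo_value HugePages_Surp)          # 0
  resv=$(get_meminfo_value HugePages_Rsvd)          # 0
  nr_hugepages=$(get_meminfo_value HugePages_Total) # 1024

  echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
  (( requested == nr_hugepages + surp + resv )) || echo "unexpected surplus/reserved pages"
  (( requested == nr_hugepages ))               || echo "allocation shrank below $requested pages"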
00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:10.604 nr_hugepages=1024 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:10.604 resv_hugepages=0 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:10.604 surplus_hugepages=0 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:10.604 anon_hugepages=0 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44205512 kB' 'MemAvailable: 47712936 kB' 'Buffers: 2704 kB' 'Cached: 11795612 kB' 'SwapCached: 0 kB' 'Active: 8822960 kB' 'Inactive: 3506552 kB' 'Active(anon): 8428608 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534536 kB' 'Mapped: 208832 kB' 'Shmem: 7897412 kB' 'KReclaimable: 200284 kB' 'Slab: 575708 kB' 'SReclaimable: 200284 kB' 'SUnreclaim: 375424 kB' 'KernelStack: 12800 kB' 'PageTables: 7952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9541760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196052 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 15863808 kB' 'DirectMap1G: 51380224 kB' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 
15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.605 15:15:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.863 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26612068 kB' 'MemUsed: 6217816 kB' 'SwapCached: 0 kB' 'Active: 3026960 kB' 'Inactive: 108448 kB' 'Active(anon): 2916072 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2839956 kB' 'Mapped: 48996 kB' 'AnonPages: 298552 kB' 'Shmem: 2620620 kB' 'KernelStack: 7240 kB' 'PageTables: 4880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95580 kB' 'Slab: 311336 kB' 'SReclaimable: 95580 kB' 'SUnreclaim: 215756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 
15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 
15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.864 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.865 15:15:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:10.865 node0=1024 expecting 1024 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:10.865 00:04:10.865 real 0m2.834s 00:04:10.865 user 0m1.101s 00:04:10.865 sys 0m1.651s 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.865 15:15:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:10.865 ************************************ 00:04:10.865 END TEST no_shrink_alloc 00:04:10.865 ************************************ 00:04:10.865 15:15:41 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 
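The trace above is the tail of get_meminfo: the script walks every key of /proc/meminfo (or of /sys/devices/system/node/nodeN/meminfo when a node is given), skipping keys until it reaches the one it was asked for, then echoes that value so hugepages.sh can confirm that HugePages_Total, HugePages_Rsvd and HugePages_Surp still add up to the 1024 pages it configured per node. A minimal sketch of that lookup, using a hypothetical helper name and simpler parsing than the real setup/common.sh (which slurps the file with mapfile and strips the per-node prefix with an extglob pattern), could look like this:

get_meminfo_value() {
    # key to look up (e.g. HugePages_Total) and optional NUMA node number
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # per-node statistics live in sysfs; every line there carries a "Node N " prefix
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#"Node $node "}              # drop the per-node prefix if present
        IFS=': ' read -r var val _ <<<"$line"   # split "Key:   value kB" into key/value
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done <"$mem_f"
    return 1
}

# e.g. what the kernel reports for NUMA node 0, matching the figures printed above
echo "node0 hugepages: $(get_meminfo_value HugePages_Total 0) total," \
     "$(get_meminfo_value HugePages_Free 0) free"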
00:04:10.865 15:15:41 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:10.865 15:15:41 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:10.865 15:15:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:10.865 15:15:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:10.865 15:15:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:10.865 15:15:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:10.865 15:15:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:10.865 15:15:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:10.865 15:15:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:10.865 15:15:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:10.865 15:15:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:10.865 15:15:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:10.865 15:15:41 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:10.865 15:15:41 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:10.865 00:04:10.865 real 0m11.190s 00:04:10.865 user 0m4.252s 00:04:10.865 sys 0m5.835s 00:04:10.865 15:15:41 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.865 15:15:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:10.865 ************************************ 00:04:10.865 END TEST hugepages 00:04:10.865 ************************************ 00:04:10.865 15:15:41 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:10.865 15:15:41 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:10.865 15:15:41 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.865 15:15:41 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.865 15:15:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:10.865 ************************************ 00:04:10.865 START TEST driver 00:04:10.865 ************************************ 00:04:10.865 15:15:41 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:10.865 * Looking for test storage... 
00:04:10.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:10.865 15:15:41 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:10.865 15:15:41 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.865 15:15:41 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:13.391 15:15:43 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:13.391 15:15:43 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.391 15:15:43 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.391 15:15:43 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:13.391 ************************************ 00:04:13.391 START TEST guess_driver 00:04:13.391 ************************************ 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:13.391 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:13.391 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:13.391 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:13.391 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:13.391 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:13.391 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:13.391 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:13.391 15:15:43 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:13.391 Looking for driver=vfio-pci 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.391 15:15:43 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.766 15:15:45 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.766 15:15:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.702 15:15:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.702 15:15:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.702 15:15:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.702 15:15:46 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:15.702 15:15:46 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:15.702 15:15:46 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:15.702 15:15:46 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:18.234 00:04:18.234 real 0m4.905s 00:04:18.234 user 0m1.076s 00:04:18.234 sys 0m1.897s 00:04:18.234 15:15:48 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.234 15:15:48 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:18.234 ************************************ 00:04:18.234 END TEST guess_driver 00:04:18.234 ************************************ 00:04:18.234 15:15:48 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:18.234 00:04:18.234 real 0m7.417s 00:04:18.234 user 0m1.686s 00:04:18.234 sys 0m2.844s 00:04:18.234 15:15:48 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.234 15:15:48 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:18.234 ************************************ 00:04:18.234 END TEST driver 00:04:18.234 ************************************ 00:04:18.234 15:15:48 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:18.234 15:15:48 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:18.234 15:15:48 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.234 15:15:48 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.234 15:15:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:18.234 ************************************ 00:04:18.234 START TEST devices 00:04:18.234 ************************************ 00:04:18.234 15:15:48 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:18.234 * Looking for test storage... 00:04:18.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:18.234 15:15:48 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:18.234 15:15:48 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:18.234 15:15:48 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:18.234 15:15:48 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:20.137 15:15:50 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:20.137 15:15:50 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:20.137 15:15:50 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:20.137 15:15:50 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:20.137 15:15:50 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:20.137 15:15:50 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:20.137 15:15:50 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:20.137 15:15:50 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:20.137 15:15:50 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:20.137 15:15:50 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:20.137 15:15:50 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:20.137 15:15:50 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:20.137 15:15:50 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:20.137 15:15:50 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:20.137 15:15:50 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:20.137 15:15:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:20.137 15:15:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:20.137 15:15:50 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:20.137 15:15:50 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:20.137 15:15:50 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:20.137 15:15:50 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:20.137 
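Before the GPT probe that follows, the devices.sh trace above has already narrowed down the candidate disks: it skips controller-specific nvme*c* entries and zoned namespaces, and only accepts block devices of at least min_disk_size (3 GiB). A rough, self-contained bash sketch of that filtering, with hypothetical variable names and without the PCI-address bookkeeping the real devices.sh also performs, is:

# keep NVMe namespaces that are not zoned and are large enough to test on
min_disk_size=$((3 * 1024 * 1024 * 1024))       # 3221225472 bytes, as in the trace
blocks=()
for dev in /sys/block/nvme*; do
    [[ -e $dev/size ]] || continue              # glob did not match or entry has no size
    name=${dev##*/}
    [[ $name == *c* ]] && continue              # skip controller paths like nvme0c0n1
    [[ -e $dev/queue/zoned && $(<"$dev/queue/zoned") != none ]] && continue
    size_bytes=$(( $(<"$dev/size") * 512 ))     # sysfs size is reported in 512-byte sectors
    (( size_bytes >= min_disk_size )) && blocks+=("$name")
done
(( ${#blocks[@]} )) && printf 'candidate test disk: %s\n' "${blocks[@]}"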
15:15:50 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:20.137 No valid GPT data, bailing 00:04:20.137 15:15:50 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:20.137 15:15:50 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:20.137 15:15:50 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:20.137 15:15:50 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:20.137 15:15:50 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:20.137 15:15:50 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:20.137 15:15:50 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:20.137 15:15:50 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:20.137 15:15:50 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:20.137 15:15:50 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:20.137 15:15:50 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:20.137 15:15:50 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:20.137 15:15:50 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:20.137 15:15:50 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.137 15:15:50 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.137 15:15:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:20.137 ************************************ 00:04:20.137 START TEST nvme_mount 00:04:20.137 ************************************ 00:04:20.137 15:15:50 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:20.137 15:15:50 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:20.137 15:15:50 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:20.137 15:15:50 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.137 15:15:50 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.137 15:15:50 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:20.137 15:15:50 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:20.137 15:15:50 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:20.137 15:15:50 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:20.137 15:15:50 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:20.137 15:15:50 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:20.137 15:15:50 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:20.137 15:15:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:20.137 15:15:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:20.137 15:15:50 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:20.137 15:15:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:20.137 15:15:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
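Aside: the block_in_use / sec_size_to_bytes checks traced above treat a disk as free when blkid finds no partition-table signature on it, and as usable when it is at least 3 GiB. A rough standalone sketch of those two checks; the device name is a placeholder, not taken from this run:

#!/usr/bin/env bash
set -euo pipefail

dev="nvme0n1"
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in devices.sh

# blkid exits non-zero when it finds nothing, which is the "free" case here.
pt="$(blkid -s PTTYPE -o value "/dev/${dev}" || true)"
if [[ -n "${pt}" ]]; then
    echo "/dev/${dev} already carries a ${pt} partition table, skipping"
    exit 0
fi

# /sys/block/<dev>/size is in 512-byte sectors regardless of the LBA format.
bytes=$(( $(cat "/sys/block/${dev}/size") * 512 ))
echo "/dev/${dev}: ${bytes} bytes"
(( bytes >= min_disk_size )) && echo "/dev/${dev} is large enough for the mount tests"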
# (( part <= part_no )) 00:04:20.137 15:15:50 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:20.137 15:15:50 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:20.137 15:15:50 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:21.075 Creating new GPT entries in memory. 00:04:21.075 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:21.075 other utilities. 00:04:21.075 15:15:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:21.075 15:15:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:21.075 15:15:51 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:21.075 15:15:51 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:21.075 15:15:51 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:22.013 Creating new GPT entries in memory. 00:04:22.013 The operation has completed successfully. 00:04:22.013 15:15:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:22.013 15:15:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:22.013 15:15:52 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 961424 00:04:22.013 15:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.013 15:15:52 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:22.013 15:15:52 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.013 15:15:52 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:22.013 15:15:52 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:22.013 15:15:52 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.013 15:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:22.013 15:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:22.013 15:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:22.013 15:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.013 15:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:22.013 15:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:22.013 15:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:22.013 15:15:52 
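Aside: a condensed sketch of the partition_drive / mkfs / mount sequence traced above. Destructive, and the disk and mount point are placeholders; the real test also serializes partition-table updates with flock and its sync_dev_uevents.sh helper, for which plain partprobe + udevadm settle stand in here:

#!/usr/bin/env bash
set -euo pipefail

disk="/dev/nvme0n1"
mnt="/tmp/spdk_nvme_mount"

sgdisk "${disk}" --zap-all                   # wipe existing GPT/MBR structures
sgdisk "${disk}" --new=1:2048:2099199        # one 1 GiB partition (512 B sectors)
partprobe "${disk}"
udevadm settle

mkfs.ext4 -qF "${disk}p1"                    # quiet, force: same flags as in the log
mkdir -p "${mnt}"
mount "${disk}p1" "${mnt}"
: > "${mnt}/test_nvme"                       # dummy file the verify step looks for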
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:22.013 15:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:22.013 15:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.013 15:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:22.013 15:15:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:22.013 15:15:52 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.013 15:15:52 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:22.952 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.210 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:23.210 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:23.210 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.210 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:23.210 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:23.210 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:23.210 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.210 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.210 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:23.210 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:23.210 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:23.210 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:23.210 15:15:53 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:23.468 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:23.468 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:23.468 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:23.468 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:23.468 15:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:23.468 15:15:54 setup.sh.devices.nvme_mount -- 
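Aside: the cleanup_nvme step traced above unmounts the test mount point and then wipes signatures from the partition and the whole disk. A small standalone sketch with placeholder paths:

#!/usr/bin/env bash
set -euo pipefail

mount_point="/tmp/spdk_nvme_mount"    # stands in for the test's nvme_mount directory
disk="/dev/nvme0n1"

if mountpoint -q "${mount_point}"; then
    umount "${mount_point}"
fi

# Remove the ext4 signature from the partition, then every signature
# (GPT, protective MBR) from the whole disk, as in the wipefs lines above.
[[ -b "${disk}p1" ]] && wipefs --all "${disk}p1"
[[ -b "${disk}"   ]] && wipefs --all "${disk}"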
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:23.468 15:15:54 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.468 15:15:54 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:23.468 15:15:54 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:23.468 15:15:54 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.727 15:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:23.727 15:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:23.727 15:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:23.727 15:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.727 15:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:23.727 15:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:23.727 15:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:23.727 15:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:23.727 15:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:23.727 15:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.727 15:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:23.727 15:15:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:23.727 15:15:54 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.727 15:15:54 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:24.659 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.659 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:24.659 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:24.659 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.659 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:24.660 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.917 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.917 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:24.917 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.917 15:15:55 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:24.917 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.917 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.917 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:04:24.917 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:24.917 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:24.917 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:24.917 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:24.917 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:24.917 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:24.917 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:24.917 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.917 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:24.917 15:15:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:24.917 15:15:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.917 15:15:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:26.293 15:15:56 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:26.293 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:26.293 00:04:26.293 real 0m6.272s 00:04:26.293 user 0m1.432s 00:04:26.293 sys 0m2.398s 00:04:26.294 15:15:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.294 15:15:56 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:04:26.294 ************************************ 00:04:26.294 END TEST nvme_mount 00:04:26.294 ************************************ 00:04:26.294 15:15:56 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:26.294 15:15:56 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:26.294 15:15:56 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.294 15:15:56 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.294 15:15:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:26.294 ************************************ 00:04:26.294 START TEST dm_mount 00:04:26.294 ************************************ 00:04:26.294 15:15:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:26.294 15:15:56 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:26.294 15:15:56 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:26.294 15:15:56 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:26.294 15:15:56 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:26.294 15:15:56 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:26.294 15:15:56 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:26.294 15:15:56 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:26.294 15:15:56 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:26.294 15:15:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:26.294 15:15:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:26.294 15:15:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:26.294 15:15:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.294 15:15:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:26.294 15:15:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:26.294 15:15:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.294 15:15:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:26.294 15:15:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:26.294 15:15:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.294 15:15:56 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:26.294 15:15:56 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:26.294 15:15:56 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:27.230 Creating new GPT entries in memory. 00:04:27.230 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:27.230 other utilities. 00:04:27.230 15:15:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:27.230 15:15:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.230 15:15:57 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:27.230 15:15:57 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:27.230 15:15:57 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:28.166 Creating new GPT entries in memory. 00:04:28.166 The operation has completed successfully. 00:04:28.425 15:15:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:28.425 15:15:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.425 15:15:58 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:28.425 15:15:58 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:28.425 15:15:58 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:29.361 The operation has completed successfully. 00:04:29.361 15:15:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:29.361 15:15:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:29.361 15:15:59 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 963808 00:04:29.361 15:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:29.361 15:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:29.361 15:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:29.361 15:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:29.361 15:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:29.361 15:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:29.361 15:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:29.361 15:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:29.361 15:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:29.361 15:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:29.361 15:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:29.361 15:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:29.361 15:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:29.361 15:15:59 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:29.361 15:15:59 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:29.361 15:15:59 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:29.361 15:15:59 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:29.361 15:15:59 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:29.361 15:16:00 setup.sh.devices.dm_mount -- 
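Aside: "dmsetup create nvme_dm_test" above builds a device-mapper node over the two freshly created partitions. The exact dm table devices.sh feeds it is not shown in this log, so the linear table below is only an assumed illustration of how such a device can be stitched together:

#!/usr/bin/env bash
set -euo pipefail

p1="/dev/nvme0n1p1"; p2="/dev/nvme0n1p2"    # placeholders
s1="$(blockdev --getsz "${p1}")"            # sizes in 512-byte sectors
s2="$(blockdev --getsz "${p2}")"

dmsetup create nvme_dm_test <<EOF
0 ${s1} linear ${p1} 0
${s1} ${s2} linear ${p2} 0
EOF

# The node shows up as /dev/mapper/nvme_dm_test -> /dev/dm-N,
# and each partition lists that dm device under its holders/ directory.
readlink -f /dev/mapper/nvme_dm_test
ls "/sys/class/block/${p1##*/}/holders/"

# Tear down when finished:
# dmsetup remove --force nvme_dm_test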
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:29.361 15:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:29.361 15:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:29.361 15:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:29.361 15:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:29.361 15:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:29.361 15:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:29.361 15:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:29.361 15:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:29.361 15:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:29.361 15:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.361 15:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:29.361 15:16:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:29.361 15:16:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.361 15:16:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:30.735 15:16:01 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:30.735 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.736 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:30.736 15:16:01 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:30.736 15:16:01 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.736 15:16:01 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:31.670 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.929 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:31.929 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:31.929 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:31.929 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:31.929 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:31.929 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:31.929 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:31.929 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:31.929 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:31.929 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:31.929 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:31.929 15:16:02 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:31.929 00:04:31.929 real 0m5.684s 00:04:31.929 user 0m0.952s 00:04:31.929 sys 0m1.578s 00:04:31.929 15:16:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.929 15:16:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:31.929 ************************************ 00:04:31.929 END TEST dm_mount 00:04:31.929 ************************************ 00:04:31.929 15:16:02 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:04:31.929 15:16:02 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:31.929 15:16:02 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:31.929 15:16:02 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.929 15:16:02 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:31.929 15:16:02 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:31.929 15:16:02 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:31.929 15:16:02 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:32.187 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:32.187 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:32.187 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:32.187 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:32.187 15:16:02 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:32.187 15:16:02 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:32.187 15:16:02 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:32.187 15:16:02 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:32.187 15:16:02 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:32.187 15:16:02 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:32.187 15:16:02 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:32.187 00:04:32.187 real 0m13.957s 00:04:32.187 user 0m3.075s 00:04:32.187 sys 0m5.053s 00:04:32.187 15:16:02 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.187 15:16:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:32.187 ************************************ 00:04:32.187 END TEST devices 00:04:32.187 ************************************ 00:04:32.187 15:16:02 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:32.187 00:04:32.187 real 0m43.193s 00:04:32.187 user 0m12.274s 00:04:32.187 sys 0m19.111s 00:04:32.187 15:16:02 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.187 15:16:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:32.187 ************************************ 00:04:32.187 END TEST setup.sh 00:04:32.187 ************************************ 00:04:32.187 15:16:02 -- common/autotest_common.sh@1142 -- # return 0 00:04:32.187 15:16:02 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:33.559 Hugepages 00:04:33.559 node hugesize free / total 00:04:33.559 node0 1048576kB 0 / 0 00:04:33.559 node0 2048kB 2048 / 2048 00:04:33.559 node1 1048576kB 0 / 0 00:04:33.559 node1 2048kB 0 / 0 00:04:33.559 00:04:33.559 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:33.559 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:33.559 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:33.559 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:33.559 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:33.559 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:33.559 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:33.559 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:33.559 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:33.559 I/OAT 
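Aside: the "node hugesize free / total" table printed above by setup.sh status comes straight from per-NUMA-node sysfs counters. A small reader that produces the same rows:

#!/usr/bin/env bash
set -euo pipefail

for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "${node}"/hugepages/hugepages-*; do
        size="${hp##*hugepages-}"                 # e.g. 2048kB or 1048576kB
        free="$(cat "${hp}/free_hugepages")"
        total="$(cat "${hp}/nr_hugepages")"
        echo "$(basename "${node}") ${size} ${free} / ${total}"
    done
done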
0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:33.559 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:33.559 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:33.559 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:33.559 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:33.559 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:33.559 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:33.559 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:33.559 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:33.559 15:16:04 -- spdk/autotest.sh@130 -- # uname -s 00:04:33.559 15:16:04 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:33.559 15:16:04 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:33.559 15:16:04 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:34.935 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:34.935 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:34.935 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:34.935 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:34.935 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:34.935 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:34.935 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:34.935 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:34.935 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:34.935 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:34.935 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:34.935 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:34.935 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:34.935 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:34.935 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:34.935 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:35.870 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:35.870 15:16:06 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:36.805 15:16:07 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:36.805 15:16:07 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:36.805 15:16:07 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:36.805 15:16:07 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:36.805 15:16:07 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:36.805 15:16:07 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:36.805 15:16:07 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:36.805 15:16:07 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:36.805 15:16:07 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:37.061 15:16:07 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:37.061 15:16:07 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:37.061 15:16:07 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.085 Waiting for block devices as requested 00:04:38.085 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:38.343 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:38.343 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:38.343 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:38.600 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:38.600 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:38.600 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:38.600 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:38.858 0000:00:04.0 (8086 0e20): 
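Aside: get_nvme_bdfs above resolves controllers to PCI addresses via gen_nvme.sh and jq; the same controller-to-BDF mapping can be read from sysfs. A sketch of that mapping (illustrative, not the helper the test actually calls):

#!/usr/bin/env bash
set -euo pipefail

for ctrl in /sys/class/nvme/nvme[0-9]*; do
    name="$(basename "${ctrl}")"                  # e.g. nvme0
    # /sys/class/nvme/nvmeX/device points at the PCI function the controller
    # sits on, so its basename is the BDF (e.g. 0000:88:00.0).
    bdf="$(basename "$(readlink -f "${ctrl}/device")")"
    echo "/dev/${name} -> ${bdf}"
done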
vfio-pci -> ioatdma 00:04:38.858 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:38.858 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:38.858 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:39.116 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:39.116 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:39.116 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:39.375 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:39.375 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:39.375 15:16:10 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:39.376 15:16:10 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:39.376 15:16:10 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:39.376 15:16:10 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:04:39.376 15:16:10 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:39.376 15:16:10 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:39.376 15:16:10 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:39.376 15:16:10 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:39.376 15:16:10 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:39.376 15:16:10 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:39.376 15:16:10 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:39.376 15:16:10 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:39.376 15:16:10 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:39.376 15:16:10 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:04:39.376 15:16:10 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:39.376 15:16:10 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:39.376 15:16:10 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:39.376 15:16:10 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:39.376 15:16:10 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:39.376 15:16:10 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:39.376 15:16:10 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:39.376 15:16:10 -- common/autotest_common.sh@1557 -- # continue 00:04:39.376 15:16:10 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:39.376 15:16:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:39.376 15:16:10 -- common/autotest_common.sh@10 -- # set +x 00:04:39.376 15:16:10 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:39.376 15:16:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:39.376 15:16:10 -- common/autotest_common.sh@10 -- # set +x 00:04:39.376 15:16:10 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:40.751 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:40.751 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:40.751 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:40.751 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:40.751 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:40.751 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:40.751 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:40.751 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:40.751 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:40.751 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
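Aside: the id-ctrl probing traced above pulls the controller's OACS field with nvme-cli and tests the namespace-management bit, then checks that no capacity is left unallocated. A standalone sketch of that parsing (requires nvme-cli; the device path is a placeholder, and the grep/cut pipeline mirrors the one in the log):

#!/usr/bin/env bash
set -euo pipefail

ctrl="/dev/nvme0"

oacs="$(nvme id-ctrl "${ctrl}" | grep -m1 oacs | cut -d: -f2 | tr -d ' ')"
ns_manage=$(( oacs & 0x8 ))           # bit 3: namespace management supported

unvmcap="$(nvme id-ctrl "${ctrl}" | grep -m1 unvmcap | cut -d: -f2 | tr -d ' ,')"

echo "oacs=${oacs} ns_manage=${ns_manage} unvmcap=${unvmcap}"
if (( ns_manage != 0 )) && (( unvmcap == 0 )); then
    echo "namespace management supported, no unallocated capacity"
fi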
00:04:40.751 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:40.751 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:40.751 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:40.751 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:40.751 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:40.751 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:41.687 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:41.945 15:16:12 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:41.945 15:16:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:41.945 15:16:12 -- common/autotest_common.sh@10 -- # set +x 00:04:41.945 15:16:12 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:41.945 15:16:12 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:41.945 15:16:12 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:41.945 15:16:12 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:41.945 15:16:12 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:41.945 15:16:12 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:41.945 15:16:12 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:41.945 15:16:12 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:41.945 15:16:12 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:41.946 15:16:12 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:41.946 15:16:12 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:41.946 15:16:12 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:41.946 15:16:12 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:41.946 15:16:12 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:41.946 15:16:12 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:41.946 15:16:12 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:41.946 15:16:12 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:41.946 15:16:12 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:41.946 15:16:12 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:04:41.946 15:16:12 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:04:41.946 15:16:12 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=968987 00:04:41.946 15:16:12 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.946 15:16:12 -- common/autotest_common.sh@1598 -- # waitforlisten 968987 00:04:41.946 15:16:12 -- common/autotest_common.sh@829 -- # '[' -z 968987 ']' 00:04:41.946 15:16:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.946 15:16:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.946 15:16:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.946 15:16:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.946 15:16:12 -- common/autotest_common.sh@10 -- # set +x 00:04:41.946 [2024-07-13 15:16:12.617615] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
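Editor's note: the opal_revert_cleanup step traced above selects its target by listing NVMe transport addresses with gen_nvme.sh and matching the PCI device ID against 0x0a54. A minimal stand-alone sketch of that selection, assuming $rootdir points at the SPDK checkout and jq is installed:

  # Enumerate NVMe transport addresses from the generated bdev config
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
      # Keep only controllers whose PCI device ID is 0x0a54, as in the trace above
      if [[ "$(cat "/sys/bus/pci/devices/$bdf/device")" == "0x0a54" ]]; then
          printf '%s\n' "$bdf"
      fi
  done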
00:04:41.946 [2024-07-13 15:16:12.617693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid968987 ] 00:04:41.946 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.946 [2024-07-13 15:16:12.648987] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:41.946 [2024-07-13 15:16:12.681008] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.204 [2024-07-13 15:16:12.771498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.462 15:16:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:42.462 15:16:13 -- common/autotest_common.sh@862 -- # return 0 00:04:42.462 15:16:13 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:42.462 15:16:13 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:42.462 15:16:13 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:45.744 nvme0n1 00:04:45.744 15:16:16 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:45.744 [2024-07-13 15:16:16.327453] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:45.744 [2024-07-13 15:16:16.327503] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:45.744 request: 00:04:45.744 { 00:04:45.744 "nvme_ctrlr_name": "nvme0", 00:04:45.744 "password": "test", 00:04:45.744 "method": "bdev_nvme_opal_revert", 00:04:45.744 "req_id": 1 00:04:45.744 } 00:04:45.744 Got JSON-RPC error response 00:04:45.744 response: 00:04:45.744 { 00:04:45.744 "code": -32603, 00:04:45.744 "message": "Internal error" 00:04:45.744 } 00:04:45.744 15:16:16 -- common/autotest_common.sh@1604 -- # true 00:04:45.744 15:16:16 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:45.744 15:16:16 -- common/autotest_common.sh@1608 -- # killprocess 968987 00:04:45.744 15:16:16 -- common/autotest_common.sh@948 -- # '[' -z 968987 ']' 00:04:45.744 15:16:16 -- common/autotest_common.sh@952 -- # kill -0 968987 00:04:45.744 15:16:16 -- common/autotest_common.sh@953 -- # uname 00:04:45.744 15:16:16 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:45.744 15:16:16 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 968987 00:04:45.744 15:16:16 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:45.744 15:16:16 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:45.744 15:16:16 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 968987' 00:04:45.744 killing process with pid 968987 00:04:45.744 15:16:16 -- common/autotest_common.sh@967 -- # kill 968987 00:04:45.744 15:16:16 -- common/autotest_common.sh@972 -- # wait 968987 00:04:47.644 15:16:18 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:47.644 15:16:18 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:47.644 15:16:18 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:47.644 15:16:18 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:47.644 15:16:18 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:47.644 15:16:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:47.644 
15:16:18 -- common/autotest_common.sh@10 -- # set +x 00:04:47.644 15:16:18 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:47.644 15:16:18 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:47.644 15:16:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.644 15:16:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.644 15:16:18 -- common/autotest_common.sh@10 -- # set +x 00:04:47.644 ************************************ 00:04:47.644 START TEST env 00:04:47.644 ************************************ 00:04:47.644 15:16:18 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:47.644 * Looking for test storage... 00:04:47.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:47.644 15:16:18 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:47.644 15:16:18 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.644 15:16:18 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.644 15:16:18 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.644 ************************************ 00:04:47.644 START TEST env_memory 00:04:47.644 ************************************ 00:04:47.644 15:16:18 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:47.644 00:04:47.644 00:04:47.644 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.644 http://cunit.sourceforge.net/ 00:04:47.644 00:04:47.644 00:04:47.644 Suite: memory 00:04:47.644 Test: alloc and free memory map ...[2024-07-13 15:16:18.317646] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:47.644 passed 00:04:47.644 Test: mem map translation ...[2024-07-13 15:16:18.342286] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:47.644 [2024-07-13 15:16:18.342313] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:47.644 [2024-07-13 15:16:18.342366] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:47.644 [2024-07-13 15:16:18.342381] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:47.644 passed 00:04:47.644 Test: mem map registration ...[2024-07-13 15:16:18.394632] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:47.644 [2024-07-13 15:16:18.394656] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:47.902 passed 00:04:47.902 Test: mem map adjacent registrations ...passed 00:04:47.902 00:04:47.902 Run Summary: Type Total Ran Passed Failed Inactive 00:04:47.902 suites 1 1 n/a 0 0 00:04:47.902 tests 4 4 4 0 0 00:04:47.902 asserts 152 152 
152 0 n/a 00:04:47.902 00:04:47.902 Elapsed time = 0.173 seconds 00:04:47.902 00:04:47.902 real 0m0.181s 00:04:47.902 user 0m0.172s 00:04:47.902 sys 0m0.008s 00:04:47.902 15:16:18 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.902 15:16:18 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:47.902 ************************************ 00:04:47.902 END TEST env_memory 00:04:47.902 ************************************ 00:04:47.902 15:16:18 env -- common/autotest_common.sh@1142 -- # return 0 00:04:47.902 15:16:18 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:47.902 15:16:18 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.902 15:16:18 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.902 15:16:18 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.902 ************************************ 00:04:47.902 START TEST env_vtophys 00:04:47.902 ************************************ 00:04:47.902 15:16:18 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:47.902 EAL: lib.eal log level changed from notice to debug 00:04:47.902 EAL: Detected lcore 0 as core 0 on socket 0 00:04:47.902 EAL: Detected lcore 1 as core 1 on socket 0 00:04:47.902 EAL: Detected lcore 2 as core 2 on socket 0 00:04:47.902 EAL: Detected lcore 3 as core 3 on socket 0 00:04:47.902 EAL: Detected lcore 4 as core 4 on socket 0 00:04:47.902 EAL: Detected lcore 5 as core 5 on socket 0 00:04:47.902 EAL: Detected lcore 6 as core 8 on socket 0 00:04:47.902 EAL: Detected lcore 7 as core 9 on socket 0 00:04:47.902 EAL: Detected lcore 8 as core 10 on socket 0 00:04:47.902 EAL: Detected lcore 9 as core 11 on socket 0 00:04:47.902 EAL: Detected lcore 10 as core 12 on socket 0 00:04:47.902 EAL: Detected lcore 11 as core 13 on socket 0 00:04:47.902 EAL: Detected lcore 12 as core 0 on socket 1 00:04:47.902 EAL: Detected lcore 13 as core 1 on socket 1 00:04:47.902 EAL: Detected lcore 14 as core 2 on socket 1 00:04:47.902 EAL: Detected lcore 15 as core 3 on socket 1 00:04:47.902 EAL: Detected lcore 16 as core 4 on socket 1 00:04:47.902 EAL: Detected lcore 17 as core 5 on socket 1 00:04:47.902 EAL: Detected lcore 18 as core 8 on socket 1 00:04:47.902 EAL: Detected lcore 19 as core 9 on socket 1 00:04:47.902 EAL: Detected lcore 20 as core 10 on socket 1 00:04:47.902 EAL: Detected lcore 21 as core 11 on socket 1 00:04:47.902 EAL: Detected lcore 22 as core 12 on socket 1 00:04:47.902 EAL: Detected lcore 23 as core 13 on socket 1 00:04:47.902 EAL: Detected lcore 24 as core 0 on socket 0 00:04:47.902 EAL: Detected lcore 25 as core 1 on socket 0 00:04:47.902 EAL: Detected lcore 26 as core 2 on socket 0 00:04:47.902 EAL: Detected lcore 27 as core 3 on socket 0 00:04:47.902 EAL: Detected lcore 28 as core 4 on socket 0 00:04:47.902 EAL: Detected lcore 29 as core 5 on socket 0 00:04:47.902 EAL: Detected lcore 30 as core 8 on socket 0 00:04:47.902 EAL: Detected lcore 31 as core 9 on socket 0 00:04:47.902 EAL: Detected lcore 32 as core 10 on socket 0 00:04:47.902 EAL: Detected lcore 33 as core 11 on socket 0 00:04:47.902 EAL: Detected lcore 34 as core 12 on socket 0 00:04:47.902 EAL: Detected lcore 35 as core 13 on socket 0 00:04:47.902 EAL: Detected lcore 36 as core 0 on socket 1 00:04:47.902 EAL: Detected lcore 37 as core 1 on socket 1 00:04:47.902 EAL: Detected lcore 38 as core 2 on socket 1 00:04:47.902 EAL: 
Detected lcore 39 as core 3 on socket 1 00:04:47.902 EAL: Detected lcore 40 as core 4 on socket 1 00:04:47.902 EAL: Detected lcore 41 as core 5 on socket 1 00:04:47.902 EAL: Detected lcore 42 as core 8 on socket 1 00:04:47.902 EAL: Detected lcore 43 as core 9 on socket 1 00:04:47.902 EAL: Detected lcore 44 as core 10 on socket 1 00:04:47.902 EAL: Detected lcore 45 as core 11 on socket 1 00:04:47.903 EAL: Detected lcore 46 as core 12 on socket 1 00:04:47.903 EAL: Detected lcore 47 as core 13 on socket 1 00:04:47.903 EAL: Maximum logical cores by configuration: 128 00:04:47.903 EAL: Detected CPU lcores: 48 00:04:47.903 EAL: Detected NUMA nodes: 2 00:04:47.903 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:04:47.903 EAL: Detected shared linkage of DPDK 00:04:47.903 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:04:47.903 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:04:47.903 EAL: Registered [vdev] bus. 00:04:47.903 EAL: bus.vdev log level changed from disabled to notice 00:04:47.903 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:04:47.903 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:04:47.903 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:47.903 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:47.903 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:04:47.903 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:04:47.903 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:04:47.903 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:04:47.903 EAL: No shared files mode enabled, IPC will be disabled 00:04:47.903 EAL: No shared files mode enabled, IPC is disabled 00:04:47.903 EAL: Bus pci wants IOVA as 'DC' 00:04:47.903 EAL: Bus vdev wants IOVA as 'DC' 00:04:47.903 EAL: Buses did not request a specific IOVA mode. 00:04:47.903 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:47.903 EAL: Selected IOVA mode 'VA' 00:04:47.903 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.903 EAL: Probing VFIO support... 00:04:47.903 EAL: IOMMU type 1 (Type 1) is supported 00:04:47.903 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:47.903 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:47.903 EAL: VFIO support initialized 00:04:47.903 EAL: Ask a virtual area of 0x2e000 bytes 00:04:47.903 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:47.903 EAL: Setting up physically contiguous memory... 
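Editor's note: the EAL probe above reports IOMMU type 1 support and initializes VFIO before laying out hugepage memory. A rough sketch of checking those host prerequisites by hand (standard Linux sysfs paths, not taken from this log):

  # vfio-pci must be loaded for the device bindings shown earlier
  lsmod | grep -q '^vfio_pci' || echo "vfio-pci module not loaded"
  # A populated iommu_groups directory indicates the IOMMU is enabled
  ls /sys/kernel/iommu_groups | wc -l
  # 2048 kB hugepages per NUMA node (the EAL warns when node 1 has none free)
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages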
00:04:47.903 EAL: Setting maximum number of open files to 524288 00:04:47.903 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:47.903 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:47.903 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:47.903 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.903 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:47.903 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.903 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.903 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:47.903 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:47.903 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.903 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:47.903 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.903 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.903 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:47.903 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:47.903 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.903 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:47.903 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.903 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.903 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:47.903 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:47.903 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.903 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:47.903 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.903 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.903 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:47.903 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:47.903 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:47.903 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.903 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:47.903 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:47.903 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.903 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:47.903 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:47.903 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.903 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:47.903 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:47.903 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.903 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:47.903 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:47.903 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.903 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:47.903 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:47.903 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.903 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:47.903 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:47.903 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.903 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:47.903 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:47.903 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.903 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:47.903 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:47.903 EAL: Hugepages will be freed exactly as allocated. 00:04:47.903 EAL: No shared files mode enabled, IPC is disabled 00:04:47.903 EAL: No shared files mode enabled, IPC is disabled 00:04:47.903 EAL: TSC frequency is ~2700000 KHz 00:04:47.903 EAL: Main lcore 0 is ready (tid=7f1afcf79a00;cpuset=[0]) 00:04:47.903 EAL: Trying to obtain current memory policy. 00:04:47.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.903 EAL: Restoring previous memory policy: 0 00:04:47.903 EAL: request: mp_malloc_sync 00:04:47.903 EAL: No shared files mode enabled, IPC is disabled 00:04:47.903 EAL: Heap on socket 0 was expanded by 2MB 00:04:47.903 EAL: No shared files mode enabled, IPC is disabled 00:04:47.903 EAL: No shared files mode enabled, IPC is disabled 00:04:47.903 EAL: Mem event callback 'spdk:(nil)' registered 00:04:47.903 00:04:47.903 00:04:47.903 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.903 http://cunit.sourceforge.net/ 00:04:47.903 00:04:47.903 00:04:47.903 Suite: components_suite 00:04:47.903 Test: vtophys_malloc_test ...passed 00:04:47.903 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:47.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.903 EAL: Restoring previous memory policy: 4 00:04:47.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.903 EAL: request: mp_malloc_sync 00:04:47.903 EAL: No shared files mode enabled, IPC is disabled 00:04:47.903 EAL: Heap on socket 0 was expanded by 4MB 00:04:47.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.903 EAL: request: mp_malloc_sync 00:04:47.903 EAL: No shared files mode enabled, IPC is disabled 00:04:47.903 EAL: Heap on socket 0 was shrunk by 4MB 00:04:47.903 EAL: Trying to obtain current memory policy. 00:04:47.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.903 EAL: Restoring previous memory policy: 4 00:04:47.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.903 EAL: request: mp_malloc_sync 00:04:47.903 EAL: No shared files mode enabled, IPC is disabled 00:04:47.903 EAL: Heap on socket 0 was expanded by 6MB 00:04:47.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.903 EAL: request: mp_malloc_sync 00:04:47.903 EAL: No shared files mode enabled, IPC is disabled 00:04:47.903 EAL: Heap on socket 0 was shrunk by 6MB 00:04:47.903 EAL: Trying to obtain current memory policy. 00:04:47.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.903 EAL: Restoring previous memory policy: 4 00:04:47.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.903 EAL: request: mp_malloc_sync 00:04:47.903 EAL: No shared files mode enabled, IPC is disabled 00:04:47.903 EAL: Heap on socket 0 was expanded by 10MB 00:04:47.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.903 EAL: request: mp_malloc_sync 00:04:47.903 EAL: No shared files mode enabled, IPC is disabled 00:04:47.903 EAL: Heap on socket 0 was shrunk by 10MB 00:04:47.903 EAL: Trying to obtain current memory policy. 
00:04:47.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.903 EAL: Restoring previous memory policy: 4 00:04:47.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.903 EAL: request: mp_malloc_sync 00:04:47.903 EAL: No shared files mode enabled, IPC is disabled 00:04:47.903 EAL: Heap on socket 0 was expanded by 18MB 00:04:47.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.903 EAL: request: mp_malloc_sync 00:04:47.903 EAL: No shared files mode enabled, IPC is disabled 00:04:47.903 EAL: Heap on socket 0 was shrunk by 18MB 00:04:47.903 EAL: Trying to obtain current memory policy. 00:04:47.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.903 EAL: Restoring previous memory policy: 4 00:04:47.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.903 EAL: request: mp_malloc_sync 00:04:47.903 EAL: No shared files mode enabled, IPC is disabled 00:04:47.903 EAL: Heap on socket 0 was expanded by 34MB 00:04:47.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.903 EAL: request: mp_malloc_sync 00:04:47.903 EAL: No shared files mode enabled, IPC is disabled 00:04:47.903 EAL: Heap on socket 0 was shrunk by 34MB 00:04:47.903 EAL: Trying to obtain current memory policy. 00:04:47.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.903 EAL: Restoring previous memory policy: 4 00:04:47.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.903 EAL: request: mp_malloc_sync 00:04:47.903 EAL: No shared files mode enabled, IPC is disabled 00:04:47.903 EAL: Heap on socket 0 was expanded by 66MB 00:04:47.903 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.903 EAL: request: mp_malloc_sync 00:04:47.903 EAL: No shared files mode enabled, IPC is disabled 00:04:47.903 EAL: Heap on socket 0 was shrunk by 66MB 00:04:47.903 EAL: Trying to obtain current memory policy. 00:04:47.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.160 EAL: Restoring previous memory policy: 4 00:04:48.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.160 EAL: request: mp_malloc_sync 00:04:48.160 EAL: No shared files mode enabled, IPC is disabled 00:04:48.160 EAL: Heap on socket 0 was expanded by 130MB 00:04:48.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.160 EAL: request: mp_malloc_sync 00:04:48.160 EAL: No shared files mode enabled, IPC is disabled 00:04:48.160 EAL: Heap on socket 0 was shrunk by 130MB 00:04:48.160 EAL: Trying to obtain current memory policy. 00:04:48.160 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.160 EAL: Restoring previous memory policy: 4 00:04:48.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.160 EAL: request: mp_malloc_sync 00:04:48.160 EAL: No shared files mode enabled, IPC is disabled 00:04:48.160 EAL: Heap on socket 0 was expanded by 258MB 00:04:48.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.160 EAL: request: mp_malloc_sync 00:04:48.160 EAL: No shared files mode enabled, IPC is disabled 00:04:48.160 EAL: Heap on socket 0 was shrunk by 258MB 00:04:48.160 EAL: Trying to obtain current memory policy. 
00:04:48.160 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.416 EAL: Restoring previous memory policy: 4 00:04:48.416 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.416 EAL: request: mp_malloc_sync 00:04:48.416 EAL: No shared files mode enabled, IPC is disabled 00:04:48.416 EAL: Heap on socket 0 was expanded by 514MB 00:04:48.416 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.673 EAL: request: mp_malloc_sync 00:04:48.673 EAL: No shared files mode enabled, IPC is disabled 00:04:48.673 EAL: Heap on socket 0 was shrunk by 514MB 00:04:48.673 EAL: Trying to obtain current memory policy. 00:04:48.673 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.931 EAL: Restoring previous memory policy: 4 00:04:48.931 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.931 EAL: request: mp_malloc_sync 00:04:48.931 EAL: No shared files mode enabled, IPC is disabled 00:04:48.931 EAL: Heap on socket 0 was expanded by 1026MB 00:04:49.187 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.446 EAL: request: mp_malloc_sync 00:04:49.446 EAL: No shared files mode enabled, IPC is disabled 00:04:49.446 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:49.446 passed 00:04:49.446 00:04:49.446 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.446 suites 1 1 n/a 0 0 00:04:49.446 tests 2 2 2 0 0 00:04:49.446 asserts 497 497 497 0 n/a 00:04:49.446 00:04:49.446 Elapsed time = 1.400 seconds 00:04:49.446 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.446 EAL: request: mp_malloc_sync 00:04:49.446 EAL: No shared files mode enabled, IPC is disabled 00:04:49.446 EAL: Heap on socket 0 was shrunk by 2MB 00:04:49.446 EAL: No shared files mode enabled, IPC is disabled 00:04:49.446 EAL: No shared files mode enabled, IPC is disabled 00:04:49.446 EAL: No shared files mode enabled, IPC is disabled 00:04:49.446 00:04:49.446 real 0m1.518s 00:04:49.446 user 0m0.871s 00:04:49.446 sys 0m0.612s 00:04:49.446 15:16:20 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.446 15:16:20 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:49.446 ************************************ 00:04:49.446 END TEST env_vtophys 00:04:49.446 ************************************ 00:04:49.446 15:16:20 env -- common/autotest_common.sh@1142 -- # return 0 00:04:49.446 15:16:20 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:49.446 15:16:20 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.446 15:16:20 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.446 15:16:20 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.446 ************************************ 00:04:49.446 START TEST env_pci 00:04:49.446 ************************************ 00:04:49.446 15:16:20 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:49.446 00:04:49.446 00:04:49.446 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.446 http://cunit.sourceforge.net/ 00:04:49.446 00:04:49.446 00:04:49.446 Suite: pci 00:04:49.446 Test: pci_hook ...[2024-07-13 15:16:20.086220] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 969875 has claimed it 00:04:49.446 EAL: Cannot find device (10000:00:01.0) 00:04:49.446 EAL: Failed to attach device on primary process 00:04:49.446 passed 00:04:49.446 
00:04:49.446 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.446 suites 1 1 n/a 0 0 00:04:49.446 tests 1 1 1 0 0 00:04:49.446 asserts 25 25 25 0 n/a 00:04:49.446 00:04:49.446 Elapsed time = 0.022 seconds 00:04:49.446 00:04:49.446 real 0m0.034s 00:04:49.446 user 0m0.010s 00:04:49.446 sys 0m0.025s 00:04:49.446 15:16:20 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.446 15:16:20 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:49.446 ************************************ 00:04:49.446 END TEST env_pci 00:04:49.446 ************************************ 00:04:49.446 15:16:20 env -- common/autotest_common.sh@1142 -- # return 0 00:04:49.446 15:16:20 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:49.446 15:16:20 env -- env/env.sh@15 -- # uname 00:04:49.446 15:16:20 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:49.446 15:16:20 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:49.446 15:16:20 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:49.446 15:16:20 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:49.446 15:16:20 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.446 15:16:20 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.446 ************************************ 00:04:49.446 START TEST env_dpdk_post_init 00:04:49.446 ************************************ 00:04:49.446 15:16:20 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:49.446 EAL: Detected CPU lcores: 48 00:04:49.446 EAL: Detected NUMA nodes: 2 00:04:49.446 EAL: Detected shared linkage of DPDK 00:04:49.446 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:49.446 EAL: Selected IOVA mode 'VA' 00:04:49.446 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.446 EAL: VFIO support initialized 00:04:49.704 EAL: Using IOMMU type 1 (Type 1) 00:04:53.888 Starting DPDK initialization... 00:04:53.888 Starting SPDK post initialization... 00:04:53.888 SPDK NVMe probe 00:04:53.888 Attaching to 0000:88:00.0 00:04:53.888 Attached to 0000:88:00.0 00:04:53.888 Cleaning up... 
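Editor's note: the env_dpdk_post_init run above can be reproduced outside autotest by binding the NVMe controller with setup.sh and launching the test binary with the same core mask and base virtual address. A sketch, assuming the working directory is the SPDK repository root:

  sudo ./scripts/setup.sh          # binds the NVMe/IOAT devices to vfio-pci, as logged above
  sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
  sudo ./scripts/setup.sh reset    # return the devices to their kernel drivers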
00:04:53.888 00:04:53.888 real 0m4.401s 00:04:53.888 user 0m3.269s 00:04:53.888 sys 0m0.197s 00:04:53.888 15:16:24 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.888 15:16:24 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:53.888 ************************************ 00:04:53.888 END TEST env_dpdk_post_init 00:04:53.888 ************************************ 00:04:53.888 15:16:24 env -- common/autotest_common.sh@1142 -- # return 0 00:04:53.888 15:16:24 env -- env/env.sh@26 -- # uname 00:04:53.888 15:16:24 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:53.888 15:16:24 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:53.888 15:16:24 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.888 15:16:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.888 15:16:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:53.888 ************************************ 00:04:53.888 START TEST env_mem_callbacks 00:04:53.888 ************************************ 00:04:53.888 15:16:24 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:53.888 EAL: Detected CPU lcores: 48 00:04:53.888 EAL: Detected NUMA nodes: 2 00:04:53.888 EAL: Detected shared linkage of DPDK 00:04:53.888 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:53.888 EAL: Selected IOVA mode 'VA' 00:04:53.888 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.888 EAL: VFIO support initialized 00:04:54.146 00:04:54.146 00:04:54.146 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.146 http://cunit.sourceforge.net/ 00:04:54.146 00:04:54.146 00:04:54.146 Suite: memory 00:04:54.146 Test: test ... 
00:04:54.146 register 0x200000200000 2097152 00:04:54.146 malloc 3145728 00:04:54.146 register 0x200000400000 4194304 00:04:54.146 buf 0x200000500000 len 3145728 PASSED 00:04:54.146 malloc 64 00:04:54.146 buf 0x2000004fff40 len 64 PASSED 00:04:54.146 malloc 4194304 00:04:54.146 register 0x200000800000 6291456 00:04:54.146 buf 0x200000a00000 len 4194304 PASSED 00:04:54.146 free 0x200000500000 3145728 00:04:54.146 free 0x2000004fff40 64 00:04:54.146 unregister 0x200000400000 4194304 PASSED 00:04:54.146 free 0x200000a00000 4194304 00:04:54.146 unregister 0x200000800000 6291456 PASSED 00:04:54.146 malloc 8388608 00:04:54.146 register 0x200000400000 10485760 00:04:54.146 buf 0x200000600000 len 8388608 PASSED 00:04:54.146 free 0x200000600000 8388608 00:04:54.146 unregister 0x200000400000 10485760 PASSED 00:04:54.146 passed 00:04:54.146 00:04:54.146 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.146 suites 1 1 n/a 0 0 00:04:54.146 tests 1 1 1 0 0 00:04:54.146 asserts 15 15 15 0 n/a 00:04:54.146 00:04:54.146 Elapsed time = 0.005 seconds 00:04:54.146 00:04:54.146 real 0m0.050s 00:04:54.146 user 0m0.017s 00:04:54.146 sys 0m0.032s 00:04:54.146 15:16:24 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.146 15:16:24 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:54.146 ************************************ 00:04:54.146 END TEST env_mem_callbacks 00:04:54.146 ************************************ 00:04:54.146 15:16:24 env -- common/autotest_common.sh@1142 -- # return 0 00:04:54.146 00:04:54.146 real 0m6.479s 00:04:54.146 user 0m4.464s 00:04:54.146 sys 0m1.060s 00:04:54.146 15:16:24 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.146 15:16:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.146 ************************************ 00:04:54.146 END TEST env 00:04:54.146 ************************************ 00:04:54.146 15:16:24 -- common/autotest_common.sh@1142 -- # return 0 00:04:54.146 15:16:24 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:54.146 15:16:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.146 15:16:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.146 15:16:24 -- common/autotest_common.sh@10 -- # set +x 00:04:54.146 ************************************ 00:04:54.146 START TEST rpc 00:04:54.146 ************************************ 00:04:54.146 15:16:24 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:54.146 * Looking for test storage... 00:04:54.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:54.146 15:16:24 rpc -- rpc/rpc.sh@65 -- # spdk_pid=970609 00:04:54.146 15:16:24 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:54.146 15:16:24 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.146 15:16:24 rpc -- rpc/rpc.sh@67 -- # waitforlisten 970609 00:04:54.146 15:16:24 rpc -- common/autotest_common.sh@829 -- # '[' -z 970609 ']' 00:04:54.146 15:16:24 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.146 15:16:24 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:54.146 15:16:24 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
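Editor's note: the rpc test below starts spdk_tgt with the bdev tracepoint group enabled (-e bdev) and then drives it over /var/tmp/spdk.sock. A minimal sketch of that handshake, assuming an SPDK build tree and jq; the polling loop is illustrative and stands in for the waitforlisten helper used by the script:

  ./build/bin/spdk_tgt -e bdev &
  # Wait for the JSON-RPC UNIX domain socket to appear
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done
  # Same query the rpc_trace_cmd_test case issues; prints the shm path and bdev mask
  ./scripts/rpc.py trace_get_info | jq -r '.tpoint_shm_path, .bdev.tpoint_mask'
  kill %1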
00:04:54.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.146 15:16:24 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:54.146 15:16:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.146 [2024-07-13 15:16:24.833087] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:04:54.146 [2024-07-13 15:16:24.833200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid970609 ] 00:04:54.146 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.146 [2024-07-13 15:16:24.864463] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:54.146 [2024-07-13 15:16:24.890567] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.404 [2024-07-13 15:16:24.977410] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:54.404 [2024-07-13 15:16:24.977477] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 970609' to capture a snapshot of events at runtime. 00:04:54.404 [2024-07-13 15:16:24.977505] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:54.404 [2024-07-13 15:16:24.977516] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:54.404 [2024-07-13 15:16:24.977526] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid970609 for offline analysis/debug. 00:04:54.404 [2024-07-13 15:16:24.977562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.664 15:16:25 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:54.664 15:16:25 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:54.664 15:16:25 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:54.664 15:16:25 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:54.664 15:16:25 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:54.664 15:16:25 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:54.664 15:16:25 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.664 15:16:25 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.664 15:16:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.664 ************************************ 00:04:54.664 START TEST rpc_integrity 00:04:54.664 ************************************ 00:04:54.664 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:54.664 15:16:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:54.664 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.664 15:16:25 rpc.rpc_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:54.664 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.664 15:16:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:54.664 15:16:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:54.664 15:16:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:54.664 15:16:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:54.664 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.664 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.664 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.664 15:16:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:54.664 15:16:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:54.664 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.664 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.664 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.664 15:16:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:54.664 { 00:04:54.664 "name": "Malloc0", 00:04:54.664 "aliases": [ 00:04:54.664 "90aba534-6566-4dc4-a69a-46a6a5d511e4" 00:04:54.664 ], 00:04:54.664 "product_name": "Malloc disk", 00:04:54.664 "block_size": 512, 00:04:54.664 "num_blocks": 16384, 00:04:54.664 "uuid": "90aba534-6566-4dc4-a69a-46a6a5d511e4", 00:04:54.664 "assigned_rate_limits": { 00:04:54.664 "rw_ios_per_sec": 0, 00:04:54.664 "rw_mbytes_per_sec": 0, 00:04:54.664 "r_mbytes_per_sec": 0, 00:04:54.664 "w_mbytes_per_sec": 0 00:04:54.664 }, 00:04:54.664 "claimed": false, 00:04:54.664 "zoned": false, 00:04:54.664 "supported_io_types": { 00:04:54.664 "read": true, 00:04:54.664 "write": true, 00:04:54.664 "unmap": true, 00:04:54.664 "flush": true, 00:04:54.664 "reset": true, 00:04:54.664 "nvme_admin": false, 00:04:54.664 "nvme_io": false, 00:04:54.664 "nvme_io_md": false, 00:04:54.664 "write_zeroes": true, 00:04:54.664 "zcopy": true, 00:04:54.664 "get_zone_info": false, 00:04:54.664 "zone_management": false, 00:04:54.664 "zone_append": false, 00:04:54.664 "compare": false, 00:04:54.664 "compare_and_write": false, 00:04:54.664 "abort": true, 00:04:54.664 "seek_hole": false, 00:04:54.664 "seek_data": false, 00:04:54.664 "copy": true, 00:04:54.664 "nvme_iov_md": false 00:04:54.664 }, 00:04:54.664 "memory_domains": [ 00:04:54.664 { 00:04:54.664 "dma_device_id": "system", 00:04:54.664 "dma_device_type": 1 00:04:54.664 }, 00:04:54.664 { 00:04:54.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.664 "dma_device_type": 2 00:04:54.664 } 00:04:54.664 ], 00:04:54.664 "driver_specific": {} 00:04:54.664 } 00:04:54.664 ]' 00:04:54.664 15:16:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:54.664 15:16:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:54.665 15:16:25 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:54.665 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.665 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.665 [2024-07-13 15:16:25.359059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:54.665 [2024-07-13 15:16:25.359099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:54.665 [2024-07-13 15:16:25.359122] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18b27f0 00:04:54.665 [2024-07-13 15:16:25.359136] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:54.665 [2024-07-13 15:16:25.360649] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:54.665 [2024-07-13 15:16:25.360676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:54.665 Passthru0 00:04:54.665 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.665 15:16:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:54.665 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.665 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.665 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.665 15:16:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:54.665 { 00:04:54.665 "name": "Malloc0", 00:04:54.665 "aliases": [ 00:04:54.665 "90aba534-6566-4dc4-a69a-46a6a5d511e4" 00:04:54.665 ], 00:04:54.665 "product_name": "Malloc disk", 00:04:54.665 "block_size": 512, 00:04:54.665 "num_blocks": 16384, 00:04:54.665 "uuid": "90aba534-6566-4dc4-a69a-46a6a5d511e4", 00:04:54.665 "assigned_rate_limits": { 00:04:54.665 "rw_ios_per_sec": 0, 00:04:54.665 "rw_mbytes_per_sec": 0, 00:04:54.665 "r_mbytes_per_sec": 0, 00:04:54.665 "w_mbytes_per_sec": 0 00:04:54.665 }, 00:04:54.665 "claimed": true, 00:04:54.665 "claim_type": "exclusive_write", 00:04:54.665 "zoned": false, 00:04:54.665 "supported_io_types": { 00:04:54.665 "read": true, 00:04:54.665 "write": true, 00:04:54.665 "unmap": true, 00:04:54.665 "flush": true, 00:04:54.665 "reset": true, 00:04:54.665 "nvme_admin": false, 00:04:54.665 "nvme_io": false, 00:04:54.665 "nvme_io_md": false, 00:04:54.665 "write_zeroes": true, 00:04:54.665 "zcopy": true, 00:04:54.665 "get_zone_info": false, 00:04:54.665 "zone_management": false, 00:04:54.665 "zone_append": false, 00:04:54.665 "compare": false, 00:04:54.665 "compare_and_write": false, 00:04:54.665 "abort": true, 00:04:54.665 "seek_hole": false, 00:04:54.665 "seek_data": false, 00:04:54.665 "copy": true, 00:04:54.665 "nvme_iov_md": false 00:04:54.665 }, 00:04:54.665 "memory_domains": [ 00:04:54.665 { 00:04:54.665 "dma_device_id": "system", 00:04:54.665 "dma_device_type": 1 00:04:54.665 }, 00:04:54.665 { 00:04:54.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.665 "dma_device_type": 2 00:04:54.665 } 00:04:54.665 ], 00:04:54.665 "driver_specific": {} 00:04:54.665 }, 00:04:54.665 { 00:04:54.665 "name": "Passthru0", 00:04:54.665 "aliases": [ 00:04:54.665 "71964d2b-88fb-545e-9232-7249e6714eaf" 00:04:54.665 ], 00:04:54.665 "product_name": "passthru", 00:04:54.665 "block_size": 512, 00:04:54.665 "num_blocks": 16384, 00:04:54.665 "uuid": "71964d2b-88fb-545e-9232-7249e6714eaf", 00:04:54.665 "assigned_rate_limits": { 00:04:54.665 "rw_ios_per_sec": 0, 00:04:54.665 "rw_mbytes_per_sec": 0, 00:04:54.665 "r_mbytes_per_sec": 0, 00:04:54.665 "w_mbytes_per_sec": 0 00:04:54.665 }, 00:04:54.665 "claimed": false, 00:04:54.665 "zoned": false, 00:04:54.665 "supported_io_types": { 00:04:54.665 "read": true, 00:04:54.665 "write": true, 00:04:54.665 "unmap": true, 00:04:54.665 "flush": true, 00:04:54.665 "reset": true, 00:04:54.665 "nvme_admin": false, 00:04:54.665 "nvme_io": false, 00:04:54.665 "nvme_io_md": false, 00:04:54.665 "write_zeroes": true, 00:04:54.665 "zcopy": true, 00:04:54.665 "get_zone_info": false, 
00:04:54.665 "zone_management": false, 00:04:54.665 "zone_append": false, 00:04:54.665 "compare": false, 00:04:54.665 "compare_and_write": false, 00:04:54.665 "abort": true, 00:04:54.665 "seek_hole": false, 00:04:54.665 "seek_data": false, 00:04:54.665 "copy": true, 00:04:54.665 "nvme_iov_md": false 00:04:54.665 }, 00:04:54.665 "memory_domains": [ 00:04:54.665 { 00:04:54.665 "dma_device_id": "system", 00:04:54.665 "dma_device_type": 1 00:04:54.665 }, 00:04:54.665 { 00:04:54.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.665 "dma_device_type": 2 00:04:54.665 } 00:04:54.665 ], 00:04:54.665 "driver_specific": { 00:04:54.665 "passthru": { 00:04:54.665 "name": "Passthru0", 00:04:54.665 "base_bdev_name": "Malloc0" 00:04:54.665 } 00:04:54.665 } 00:04:54.665 } 00:04:54.665 ]' 00:04:54.665 15:16:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:54.665 15:16:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:54.665 15:16:25 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:54.665 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.665 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.665 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.665 15:16:25 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:54.665 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.665 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.964 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.964 15:16:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:54.964 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.964 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.964 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.964 15:16:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:54.964 15:16:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:54.964 15:16:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:54.964 00:04:54.964 real 0m0.230s 00:04:54.964 user 0m0.155s 00:04:54.964 sys 0m0.018s 00:04:54.964 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.964 15:16:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.964 ************************************ 00:04:54.964 END TEST rpc_integrity 00:04:54.964 ************************************ 00:04:54.964 15:16:25 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:54.964 15:16:25 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:54.964 15:16:25 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.964 15:16:25 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.964 15:16:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.964 ************************************ 00:04:54.964 START TEST rpc_plugins 00:04:54.964 ************************************ 00:04:54.964 15:16:25 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:54.964 15:16:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:54.964 15:16:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.964 15:16:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:54.964 
15:16:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.964 15:16:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:54.964 15:16:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:54.964 15:16:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.964 15:16:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:54.964 15:16:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.964 15:16:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:54.964 { 00:04:54.964 "name": "Malloc1", 00:04:54.964 "aliases": [ 00:04:54.964 "c6005f35-6344-442b-9431-3ce8d8618f25" 00:04:54.964 ], 00:04:54.964 "product_name": "Malloc disk", 00:04:54.964 "block_size": 4096, 00:04:54.964 "num_blocks": 256, 00:04:54.964 "uuid": "c6005f35-6344-442b-9431-3ce8d8618f25", 00:04:54.964 "assigned_rate_limits": { 00:04:54.964 "rw_ios_per_sec": 0, 00:04:54.964 "rw_mbytes_per_sec": 0, 00:04:54.964 "r_mbytes_per_sec": 0, 00:04:54.964 "w_mbytes_per_sec": 0 00:04:54.964 }, 00:04:54.964 "claimed": false, 00:04:54.964 "zoned": false, 00:04:54.964 "supported_io_types": { 00:04:54.964 "read": true, 00:04:54.964 "write": true, 00:04:54.964 "unmap": true, 00:04:54.964 "flush": true, 00:04:54.964 "reset": true, 00:04:54.964 "nvme_admin": false, 00:04:54.964 "nvme_io": false, 00:04:54.964 "nvme_io_md": false, 00:04:54.964 "write_zeroes": true, 00:04:54.964 "zcopy": true, 00:04:54.964 "get_zone_info": false, 00:04:54.964 "zone_management": false, 00:04:54.964 "zone_append": false, 00:04:54.964 "compare": false, 00:04:54.964 "compare_and_write": false, 00:04:54.964 "abort": true, 00:04:54.964 "seek_hole": false, 00:04:54.964 "seek_data": false, 00:04:54.964 "copy": true, 00:04:54.964 "nvme_iov_md": false 00:04:54.964 }, 00:04:54.964 "memory_domains": [ 00:04:54.964 { 00:04:54.964 "dma_device_id": "system", 00:04:54.964 "dma_device_type": 1 00:04:54.964 }, 00:04:54.964 { 00:04:54.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.964 "dma_device_type": 2 00:04:54.964 } 00:04:54.964 ], 00:04:54.964 "driver_specific": {} 00:04:54.964 } 00:04:54.964 ]' 00:04:54.964 15:16:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:54.964 15:16:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:54.964 15:16:25 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:54.964 15:16:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.964 15:16:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:54.964 15:16:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.964 15:16:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:54.964 15:16:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.964 15:16:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:54.964 15:16:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.964 15:16:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:54.964 15:16:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:54.964 15:16:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:54.964 00:04:54.964 real 0m0.113s 00:04:54.964 user 0m0.077s 00:04:54.964 sys 0m0.010s 00:04:54.964 15:16:25 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.964 15:16:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:54.964 
************************************ 00:04:54.964 END TEST rpc_plugins 00:04:54.964 ************************************ 00:04:54.964 15:16:25 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:54.964 15:16:25 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:54.964 15:16:25 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.964 15:16:25 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.964 15:16:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.964 ************************************ 00:04:54.964 START TEST rpc_trace_cmd_test 00:04:54.964 ************************************ 00:04:54.964 15:16:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:54.964 15:16:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:54.964 15:16:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:54.964 15:16:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.964 15:16:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:54.964 15:16:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.964 15:16:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:54.964 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid970609", 00:04:54.964 "tpoint_group_mask": "0x8", 00:04:54.964 "iscsi_conn": { 00:04:54.964 "mask": "0x2", 00:04:54.964 "tpoint_mask": "0x0" 00:04:54.964 }, 00:04:54.964 "scsi": { 00:04:54.964 "mask": "0x4", 00:04:54.964 "tpoint_mask": "0x0" 00:04:54.964 }, 00:04:54.964 "bdev": { 00:04:54.964 "mask": "0x8", 00:04:54.964 "tpoint_mask": "0xffffffffffffffff" 00:04:54.964 }, 00:04:54.964 "nvmf_rdma": { 00:04:54.964 "mask": "0x10", 00:04:54.964 "tpoint_mask": "0x0" 00:04:54.964 }, 00:04:54.964 "nvmf_tcp": { 00:04:54.964 "mask": "0x20", 00:04:54.964 "tpoint_mask": "0x0" 00:04:54.964 }, 00:04:54.964 "ftl": { 00:04:54.965 "mask": "0x40", 00:04:54.965 "tpoint_mask": "0x0" 00:04:54.965 }, 00:04:54.965 "blobfs": { 00:04:54.965 "mask": "0x80", 00:04:54.965 "tpoint_mask": "0x0" 00:04:54.965 }, 00:04:54.965 "dsa": { 00:04:54.965 "mask": "0x200", 00:04:54.965 "tpoint_mask": "0x0" 00:04:54.965 }, 00:04:54.965 "thread": { 00:04:54.965 "mask": "0x400", 00:04:54.965 "tpoint_mask": "0x0" 00:04:54.965 }, 00:04:54.965 "nvme_pcie": { 00:04:54.965 "mask": "0x800", 00:04:54.965 "tpoint_mask": "0x0" 00:04:54.965 }, 00:04:54.965 "iaa": { 00:04:54.965 "mask": "0x1000", 00:04:54.965 "tpoint_mask": "0x0" 00:04:54.965 }, 00:04:54.965 "nvme_tcp": { 00:04:54.965 "mask": "0x2000", 00:04:54.965 "tpoint_mask": "0x0" 00:04:54.965 }, 00:04:54.965 "bdev_nvme": { 00:04:54.965 "mask": "0x4000", 00:04:54.965 "tpoint_mask": "0x0" 00:04:54.965 }, 00:04:54.965 "sock": { 00:04:54.965 "mask": "0x8000", 00:04:54.965 "tpoint_mask": "0x0" 00:04:54.965 } 00:04:54.965 }' 00:04:54.965 15:16:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:55.222 15:16:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:55.222 15:16:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:55.222 15:16:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:55.222 15:16:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:55.222 15:16:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:55.222 15:16:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:55.222 15:16:25 rpc.rpc_trace_cmd_test 
-- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:55.222 15:16:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:55.222 15:16:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:55.222 00:04:55.222 real 0m0.198s 00:04:55.222 user 0m0.179s 00:04:55.222 sys 0m0.012s 00:04:55.222 15:16:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.222 15:16:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:55.222 ************************************ 00:04:55.222 END TEST rpc_trace_cmd_test 00:04:55.222 ************************************ 00:04:55.222 15:16:25 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:55.222 15:16:25 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:55.222 15:16:25 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:55.222 15:16:25 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:55.222 15:16:25 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.222 15:16:25 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.222 15:16:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.222 ************************************ 00:04:55.222 START TEST rpc_daemon_integrity 00:04:55.222 ************************************ 00:04:55.222 15:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:55.222 15:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:55.222 15:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.222 15:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:55.222 15:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.222 15:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:55.222 15:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:55.222 15:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:55.222 15:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:55.222 15:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.222 15:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:55.222 15:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.222 15:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:55.222 15:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:55.222 15:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.222 15:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:55.481 15:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.481 15:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:55.481 { 00:04:55.481 "name": "Malloc2", 00:04:55.481 "aliases": [ 00:04:55.481 "15ca7254-13c9-4b38-9b3c-b48409b37514" 00:04:55.481 ], 00:04:55.481 "product_name": "Malloc disk", 00:04:55.481 "block_size": 512, 00:04:55.481 "num_blocks": 16384, 00:04:55.481 "uuid": "15ca7254-13c9-4b38-9b3c-b48409b37514", 00:04:55.481 "assigned_rate_limits": { 00:04:55.481 "rw_ios_per_sec": 0, 00:04:55.481 "rw_mbytes_per_sec": 0, 00:04:55.481 "r_mbytes_per_sec": 0, 00:04:55.481 "w_mbytes_per_sec": 0 00:04:55.481 }, 00:04:55.481 "claimed": false, 00:04:55.481 "zoned": false, 
00:04:55.481 "supported_io_types": { 00:04:55.481 "read": true, 00:04:55.481 "write": true, 00:04:55.481 "unmap": true, 00:04:55.481 "flush": true, 00:04:55.481 "reset": true, 00:04:55.481 "nvme_admin": false, 00:04:55.481 "nvme_io": false, 00:04:55.481 "nvme_io_md": false, 00:04:55.481 "write_zeroes": true, 00:04:55.481 "zcopy": true, 00:04:55.481 "get_zone_info": false, 00:04:55.481 "zone_management": false, 00:04:55.481 "zone_append": false, 00:04:55.481 "compare": false, 00:04:55.481 "compare_and_write": false, 00:04:55.481 "abort": true, 00:04:55.481 "seek_hole": false, 00:04:55.481 "seek_data": false, 00:04:55.481 "copy": true, 00:04:55.481 "nvme_iov_md": false 00:04:55.481 }, 00:04:55.481 "memory_domains": [ 00:04:55.481 { 00:04:55.481 "dma_device_id": "system", 00:04:55.481 "dma_device_type": 1 00:04:55.481 }, 00:04:55.481 { 00:04:55.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:55.481 "dma_device_type": 2 00:04:55.481 } 00:04:55.481 ], 00:04:55.481 "driver_specific": {} 00:04:55.481 } 00:04:55.481 ]' 00:04:55.481 15:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:55.481 [2024-07-13 15:16:26.029775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:55.481 [2024-07-13 15:16:26.029818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:55.481 [2024-07-13 15:16:26.029843] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a56490 00:04:55.481 [2024-07-13 15:16:26.029873] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:55.481 [2024-07-13 15:16:26.031211] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:55.481 [2024-07-13 15:16:26.031252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:55.481 Passthru0 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:55.481 { 00:04:55.481 "name": "Malloc2", 00:04:55.481 "aliases": [ 00:04:55.481 "15ca7254-13c9-4b38-9b3c-b48409b37514" 00:04:55.481 ], 00:04:55.481 "product_name": "Malloc disk", 00:04:55.481 "block_size": 512, 00:04:55.481 "num_blocks": 16384, 00:04:55.481 "uuid": "15ca7254-13c9-4b38-9b3c-b48409b37514", 00:04:55.481 "assigned_rate_limits": { 00:04:55.481 "rw_ios_per_sec": 0, 00:04:55.481 "rw_mbytes_per_sec": 0, 00:04:55.481 "r_mbytes_per_sec": 0, 00:04:55.481 "w_mbytes_per_sec": 0 00:04:55.481 }, 00:04:55.481 "claimed": true, 00:04:55.481 "claim_type": "exclusive_write", 00:04:55.481 "zoned": false, 00:04:55.481 "supported_io_types": { 00:04:55.481 "read": true, 00:04:55.481 "write": true, 00:04:55.481 "unmap": true, 
00:04:55.481 "flush": true, 00:04:55.481 "reset": true, 00:04:55.481 "nvme_admin": false, 00:04:55.481 "nvme_io": false, 00:04:55.481 "nvme_io_md": false, 00:04:55.481 "write_zeroes": true, 00:04:55.481 "zcopy": true, 00:04:55.481 "get_zone_info": false, 00:04:55.481 "zone_management": false, 00:04:55.481 "zone_append": false, 00:04:55.481 "compare": false, 00:04:55.481 "compare_and_write": false, 00:04:55.481 "abort": true, 00:04:55.481 "seek_hole": false, 00:04:55.481 "seek_data": false, 00:04:55.481 "copy": true, 00:04:55.481 "nvme_iov_md": false 00:04:55.481 }, 00:04:55.481 "memory_domains": [ 00:04:55.481 { 00:04:55.481 "dma_device_id": "system", 00:04:55.481 "dma_device_type": 1 00:04:55.481 }, 00:04:55.481 { 00:04:55.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:55.481 "dma_device_type": 2 00:04:55.481 } 00:04:55.481 ], 00:04:55.481 "driver_specific": {} 00:04:55.481 }, 00:04:55.481 { 00:04:55.481 "name": "Passthru0", 00:04:55.481 "aliases": [ 00:04:55.481 "2a7c16bf-8b4a-5466-8a81-173e7aaac148" 00:04:55.481 ], 00:04:55.481 "product_name": "passthru", 00:04:55.481 "block_size": 512, 00:04:55.481 "num_blocks": 16384, 00:04:55.481 "uuid": "2a7c16bf-8b4a-5466-8a81-173e7aaac148", 00:04:55.481 "assigned_rate_limits": { 00:04:55.481 "rw_ios_per_sec": 0, 00:04:55.481 "rw_mbytes_per_sec": 0, 00:04:55.481 "r_mbytes_per_sec": 0, 00:04:55.481 "w_mbytes_per_sec": 0 00:04:55.481 }, 00:04:55.481 "claimed": false, 00:04:55.481 "zoned": false, 00:04:55.481 "supported_io_types": { 00:04:55.481 "read": true, 00:04:55.481 "write": true, 00:04:55.481 "unmap": true, 00:04:55.481 "flush": true, 00:04:55.481 "reset": true, 00:04:55.481 "nvme_admin": false, 00:04:55.481 "nvme_io": false, 00:04:55.481 "nvme_io_md": false, 00:04:55.481 "write_zeroes": true, 00:04:55.481 "zcopy": true, 00:04:55.481 "get_zone_info": false, 00:04:55.481 "zone_management": false, 00:04:55.481 "zone_append": false, 00:04:55.481 "compare": false, 00:04:55.481 "compare_and_write": false, 00:04:55.481 "abort": true, 00:04:55.481 "seek_hole": false, 00:04:55.481 "seek_data": false, 00:04:55.481 "copy": true, 00:04:55.481 "nvme_iov_md": false 00:04:55.481 }, 00:04:55.481 "memory_domains": [ 00:04:55.481 { 00:04:55.481 "dma_device_id": "system", 00:04:55.481 "dma_device_type": 1 00:04:55.481 }, 00:04:55.481 { 00:04:55.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:55.481 "dma_device_type": 2 00:04:55.481 } 00:04:55.481 ], 00:04:55.481 "driver_specific": { 00:04:55.481 "passthru": { 00:04:55.481 "name": "Passthru0", 00:04:55.481 "base_bdev_name": "Malloc2" 00:04:55.481 } 00:04:55.481 } 00:04:55.481 } 00:04:55.481 ]' 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:55.481 00:04:55.481 real 0m0.218s 00:04:55.481 user 0m0.146s 00:04:55.481 sys 0m0.020s 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.481 15:16:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:55.481 ************************************ 00:04:55.481 END TEST rpc_daemon_integrity 00:04:55.481 ************************************ 00:04:55.481 15:16:26 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:55.481 15:16:26 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:55.481 15:16:26 rpc -- rpc/rpc.sh@84 -- # killprocess 970609 00:04:55.481 15:16:26 rpc -- common/autotest_common.sh@948 -- # '[' -z 970609 ']' 00:04:55.481 15:16:26 rpc -- common/autotest_common.sh@952 -- # kill -0 970609 00:04:55.481 15:16:26 rpc -- common/autotest_common.sh@953 -- # uname 00:04:55.481 15:16:26 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:55.482 15:16:26 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 970609 00:04:55.482 15:16:26 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:55.482 15:16:26 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:55.482 15:16:26 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 970609' 00:04:55.482 killing process with pid 970609 00:04:55.482 15:16:26 rpc -- common/autotest_common.sh@967 -- # kill 970609 00:04:55.482 15:16:26 rpc -- common/autotest_common.sh@972 -- # wait 970609 00:04:56.046 00:04:56.046 real 0m1.877s 00:04:56.046 user 0m2.375s 00:04:56.046 sys 0m0.570s 00:04:56.046 15:16:26 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.046 15:16:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.046 ************************************ 00:04:56.046 END TEST rpc 00:04:56.046 ************************************ 00:04:56.046 15:16:26 -- common/autotest_common.sh@1142 -- # return 0 00:04:56.046 15:16:26 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:56.046 15:16:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.046 15:16:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.046 15:16:26 -- common/autotest_common.sh@10 -- # set +x 00:04:56.046 ************************************ 00:04:56.046 START TEST skip_rpc 00:04:56.046 ************************************ 00:04:56.046 15:16:26 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:56.046 * Looking for test storage... 
00:04:56.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:56.046 15:16:26 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:56.046 15:16:26 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:56.046 15:16:26 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:56.046 15:16:26 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.046 15:16:26 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.046 15:16:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.046 ************************************ 00:04:56.046 START TEST skip_rpc 00:04:56.046 ************************************ 00:04:56.046 15:16:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:56.046 15:16:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=970969 00:04:56.046 15:16:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.046 15:16:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:56.046 15:16:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:56.046 [2024-07-13 15:16:26.781122] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:04:56.046 [2024-07-13 15:16:26.781219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid970969 ] 00:04:56.046 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.303 [2024-07-13 15:16:26.812175] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:04:56.303 [2024-07-13 15:16:26.839932] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.303 [2024-07-13 15:16:26.925225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.560 15:16:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:01.560 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:01.560 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:01.560 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:01.560 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:01.560 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:01.560 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:01.560 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:01.560 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.560 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.560 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:01.560 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:01.560 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:01.560 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:01.560 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:01.560 15:16:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:01.560 15:16:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 970969 00:05:01.560 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 970969 ']' 00:05:01.560 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 970969 00:05:01.560 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:01.560 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:01.560 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 970969 00:05:01.561 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:01.561 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:01.561 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 970969' 00:05:01.561 killing process with pid 970969 00:05:01.561 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 970969 00:05:01.561 15:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 970969 00:05:01.561 00:05:01.561 real 0m5.443s 00:05:01.561 user 0m5.139s 00:05:01.561 sys 0m0.310s 00:05:01.561 15:16:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.561 15:16:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.561 ************************************ 00:05:01.561 END TEST skip_rpc 00:05:01.561 ************************************ 00:05:01.561 15:16:32 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:01.561 15:16:32 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:01.561 15:16:32 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.561 15:16:32 
skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.561 15:16:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.561 ************************************ 00:05:01.561 START TEST skip_rpc_with_json 00:05:01.561 ************************************ 00:05:01.561 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:01.561 15:16:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:01.561 15:16:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=971660 00:05:01.561 15:16:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.561 15:16:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.561 15:16:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 971660 00:05:01.561 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 971660 ']' 00:05:01.561 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.561 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.561 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.561 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.561 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:01.561 [2024-07-13 15:16:32.280656] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:01.561 [2024-07-13 15:16:32.280744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid971660 ] 00:05:01.561 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.561 [2024-07-13 15:16:32.312694] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
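The skip_rpc_with_json output that follows exercises the save/restore path: nvmf_get_transports fails while no TCP transport exists, nvmf_create_transport -t tcp creates one, and save_config writes the running configuration to test/rpc/config.json for the later --json relaunch. A sketch of that RPC sequence, assuming the target from this run is listening on the default socket (paths taken from the trace):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_get_transports --trtype tcp   # expected to fail: transport 'tcp' does not exist yet
$rpc nvmf_create_transport -t tcp       # target log prints '*** TCP Transport Init ***'
$rpc save_config > /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json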
00:05:01.819 [2024-07-13 15:16:32.340658] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.819 [2024-07-13 15:16:32.427352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.076 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.076 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:02.076 15:16:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:02.076 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.076 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:02.076 [2024-07-13 15:16:32.687176] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:02.076 request: 00:05:02.076 { 00:05:02.076 "trtype": "tcp", 00:05:02.076 "method": "nvmf_get_transports", 00:05:02.076 "req_id": 1 00:05:02.076 } 00:05:02.076 Got JSON-RPC error response 00:05:02.076 response: 00:05:02.076 { 00:05:02.076 "code": -19, 00:05:02.076 "message": "No such device" 00:05:02.076 } 00:05:02.076 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:02.076 15:16:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:02.076 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.076 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:02.076 [2024-07-13 15:16:32.695311] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:02.076 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.076 15:16:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:02.076 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.076 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:02.333 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.333 15:16:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:02.333 { 00:05:02.333 "subsystems": [ 00:05:02.333 { 00:05:02.333 "subsystem": "vfio_user_target", 00:05:02.333 "config": null 00:05:02.333 }, 00:05:02.333 { 00:05:02.333 "subsystem": "keyring", 00:05:02.333 "config": [] 00:05:02.333 }, 00:05:02.333 { 00:05:02.333 "subsystem": "iobuf", 00:05:02.333 "config": [ 00:05:02.333 { 00:05:02.333 "method": "iobuf_set_options", 00:05:02.333 "params": { 00:05:02.333 "small_pool_count": 8192, 00:05:02.333 "large_pool_count": 1024, 00:05:02.333 "small_bufsize": 8192, 00:05:02.333 "large_bufsize": 135168 00:05:02.333 } 00:05:02.333 } 00:05:02.333 ] 00:05:02.333 }, 00:05:02.333 { 00:05:02.333 "subsystem": "sock", 00:05:02.333 "config": [ 00:05:02.333 { 00:05:02.333 "method": "sock_set_default_impl", 00:05:02.333 "params": { 00:05:02.333 "impl_name": "posix" 00:05:02.333 } 00:05:02.333 }, 00:05:02.333 { 00:05:02.333 "method": "sock_impl_set_options", 00:05:02.333 "params": { 00:05:02.333 "impl_name": "ssl", 00:05:02.333 "recv_buf_size": 4096, 00:05:02.333 "send_buf_size": 4096, 00:05:02.333 "enable_recv_pipe": true, 00:05:02.333 "enable_quickack": false, 00:05:02.333 "enable_placement_id": 0, 00:05:02.333 "enable_zerocopy_send_server": true, 00:05:02.333 
"enable_zerocopy_send_client": false, 00:05:02.333 "zerocopy_threshold": 0, 00:05:02.333 "tls_version": 0, 00:05:02.333 "enable_ktls": false 00:05:02.333 } 00:05:02.333 }, 00:05:02.333 { 00:05:02.333 "method": "sock_impl_set_options", 00:05:02.333 "params": { 00:05:02.333 "impl_name": "posix", 00:05:02.333 "recv_buf_size": 2097152, 00:05:02.333 "send_buf_size": 2097152, 00:05:02.333 "enable_recv_pipe": true, 00:05:02.333 "enable_quickack": false, 00:05:02.333 "enable_placement_id": 0, 00:05:02.333 "enable_zerocopy_send_server": true, 00:05:02.333 "enable_zerocopy_send_client": false, 00:05:02.333 "zerocopy_threshold": 0, 00:05:02.333 "tls_version": 0, 00:05:02.333 "enable_ktls": false 00:05:02.333 } 00:05:02.333 } 00:05:02.333 ] 00:05:02.333 }, 00:05:02.333 { 00:05:02.333 "subsystem": "vmd", 00:05:02.333 "config": [] 00:05:02.333 }, 00:05:02.333 { 00:05:02.333 "subsystem": "accel", 00:05:02.333 "config": [ 00:05:02.333 { 00:05:02.333 "method": "accel_set_options", 00:05:02.333 "params": { 00:05:02.333 "small_cache_size": 128, 00:05:02.333 "large_cache_size": 16, 00:05:02.333 "task_count": 2048, 00:05:02.333 "sequence_count": 2048, 00:05:02.333 "buf_count": 2048 00:05:02.333 } 00:05:02.333 } 00:05:02.333 ] 00:05:02.333 }, 00:05:02.333 { 00:05:02.333 "subsystem": "bdev", 00:05:02.333 "config": [ 00:05:02.333 { 00:05:02.333 "method": "bdev_set_options", 00:05:02.333 "params": { 00:05:02.333 "bdev_io_pool_size": 65535, 00:05:02.333 "bdev_io_cache_size": 256, 00:05:02.333 "bdev_auto_examine": true, 00:05:02.333 "iobuf_small_cache_size": 128, 00:05:02.333 "iobuf_large_cache_size": 16 00:05:02.333 } 00:05:02.333 }, 00:05:02.333 { 00:05:02.333 "method": "bdev_raid_set_options", 00:05:02.333 "params": { 00:05:02.333 "process_window_size_kb": 1024 00:05:02.333 } 00:05:02.333 }, 00:05:02.333 { 00:05:02.333 "method": "bdev_iscsi_set_options", 00:05:02.333 "params": { 00:05:02.333 "timeout_sec": 30 00:05:02.333 } 00:05:02.333 }, 00:05:02.333 { 00:05:02.333 "method": "bdev_nvme_set_options", 00:05:02.333 "params": { 00:05:02.333 "action_on_timeout": "none", 00:05:02.333 "timeout_us": 0, 00:05:02.333 "timeout_admin_us": 0, 00:05:02.333 "keep_alive_timeout_ms": 10000, 00:05:02.333 "arbitration_burst": 0, 00:05:02.333 "low_priority_weight": 0, 00:05:02.333 "medium_priority_weight": 0, 00:05:02.333 "high_priority_weight": 0, 00:05:02.333 "nvme_adminq_poll_period_us": 10000, 00:05:02.333 "nvme_ioq_poll_period_us": 0, 00:05:02.333 "io_queue_requests": 0, 00:05:02.333 "delay_cmd_submit": true, 00:05:02.333 "transport_retry_count": 4, 00:05:02.333 "bdev_retry_count": 3, 00:05:02.333 "transport_ack_timeout": 0, 00:05:02.333 "ctrlr_loss_timeout_sec": 0, 00:05:02.333 "reconnect_delay_sec": 0, 00:05:02.333 "fast_io_fail_timeout_sec": 0, 00:05:02.334 "disable_auto_failback": false, 00:05:02.334 "generate_uuids": false, 00:05:02.334 "transport_tos": 0, 00:05:02.334 "nvme_error_stat": false, 00:05:02.334 "rdma_srq_size": 0, 00:05:02.334 "io_path_stat": false, 00:05:02.334 "allow_accel_sequence": false, 00:05:02.334 "rdma_max_cq_size": 0, 00:05:02.334 "rdma_cm_event_timeout_ms": 0, 00:05:02.334 "dhchap_digests": [ 00:05:02.334 "sha256", 00:05:02.334 "sha384", 00:05:02.334 "sha512" 00:05:02.334 ], 00:05:02.334 "dhchap_dhgroups": [ 00:05:02.334 "null", 00:05:02.334 "ffdhe2048", 00:05:02.334 "ffdhe3072", 00:05:02.334 "ffdhe4096", 00:05:02.334 "ffdhe6144", 00:05:02.334 "ffdhe8192" 00:05:02.334 ] 00:05:02.334 } 00:05:02.334 }, 00:05:02.334 { 00:05:02.334 "method": "bdev_nvme_set_hotplug", 00:05:02.334 "params": { 
00:05:02.334 "period_us": 100000, 00:05:02.334 "enable": false 00:05:02.334 } 00:05:02.334 }, 00:05:02.334 { 00:05:02.334 "method": "bdev_wait_for_examine" 00:05:02.334 } 00:05:02.334 ] 00:05:02.334 }, 00:05:02.334 { 00:05:02.334 "subsystem": "scsi", 00:05:02.334 "config": null 00:05:02.334 }, 00:05:02.334 { 00:05:02.334 "subsystem": "scheduler", 00:05:02.334 "config": [ 00:05:02.334 { 00:05:02.334 "method": "framework_set_scheduler", 00:05:02.334 "params": { 00:05:02.334 "name": "static" 00:05:02.334 } 00:05:02.334 } 00:05:02.334 ] 00:05:02.334 }, 00:05:02.334 { 00:05:02.334 "subsystem": "vhost_scsi", 00:05:02.334 "config": [] 00:05:02.334 }, 00:05:02.334 { 00:05:02.334 "subsystem": "vhost_blk", 00:05:02.334 "config": [] 00:05:02.334 }, 00:05:02.334 { 00:05:02.334 "subsystem": "ublk", 00:05:02.334 "config": [] 00:05:02.334 }, 00:05:02.334 { 00:05:02.334 "subsystem": "nbd", 00:05:02.334 "config": [] 00:05:02.334 }, 00:05:02.334 { 00:05:02.334 "subsystem": "nvmf", 00:05:02.334 "config": [ 00:05:02.334 { 00:05:02.334 "method": "nvmf_set_config", 00:05:02.334 "params": { 00:05:02.334 "discovery_filter": "match_any", 00:05:02.334 "admin_cmd_passthru": { 00:05:02.334 "identify_ctrlr": false 00:05:02.334 } 00:05:02.334 } 00:05:02.334 }, 00:05:02.334 { 00:05:02.334 "method": "nvmf_set_max_subsystems", 00:05:02.334 "params": { 00:05:02.334 "max_subsystems": 1024 00:05:02.334 } 00:05:02.334 }, 00:05:02.334 { 00:05:02.334 "method": "nvmf_set_crdt", 00:05:02.334 "params": { 00:05:02.334 "crdt1": 0, 00:05:02.334 "crdt2": 0, 00:05:02.334 "crdt3": 0 00:05:02.334 } 00:05:02.334 }, 00:05:02.334 { 00:05:02.334 "method": "nvmf_create_transport", 00:05:02.334 "params": { 00:05:02.334 "trtype": "TCP", 00:05:02.334 "max_queue_depth": 128, 00:05:02.334 "max_io_qpairs_per_ctrlr": 127, 00:05:02.334 "in_capsule_data_size": 4096, 00:05:02.334 "max_io_size": 131072, 00:05:02.334 "io_unit_size": 131072, 00:05:02.334 "max_aq_depth": 128, 00:05:02.334 "num_shared_buffers": 511, 00:05:02.334 "buf_cache_size": 4294967295, 00:05:02.334 "dif_insert_or_strip": false, 00:05:02.334 "zcopy": false, 00:05:02.334 "c2h_success": true, 00:05:02.334 "sock_priority": 0, 00:05:02.334 "abort_timeout_sec": 1, 00:05:02.334 "ack_timeout": 0, 00:05:02.334 "data_wr_pool_size": 0 00:05:02.334 } 00:05:02.334 } 00:05:02.334 ] 00:05:02.334 }, 00:05:02.334 { 00:05:02.334 "subsystem": "iscsi", 00:05:02.334 "config": [ 00:05:02.334 { 00:05:02.334 "method": "iscsi_set_options", 00:05:02.334 "params": { 00:05:02.334 "node_base": "iqn.2016-06.io.spdk", 00:05:02.334 "max_sessions": 128, 00:05:02.334 "max_connections_per_session": 2, 00:05:02.334 "max_queue_depth": 64, 00:05:02.334 "default_time2wait": 2, 00:05:02.334 "default_time2retain": 20, 00:05:02.334 "first_burst_length": 8192, 00:05:02.334 "immediate_data": true, 00:05:02.334 "allow_duplicated_isid": false, 00:05:02.334 "error_recovery_level": 0, 00:05:02.334 "nop_timeout": 60, 00:05:02.334 "nop_in_interval": 30, 00:05:02.334 "disable_chap": false, 00:05:02.334 "require_chap": false, 00:05:02.334 "mutual_chap": false, 00:05:02.334 "chap_group": 0, 00:05:02.334 "max_large_datain_per_connection": 64, 00:05:02.334 "max_r2t_per_connection": 4, 00:05:02.334 "pdu_pool_size": 36864, 00:05:02.334 "immediate_data_pool_size": 16384, 00:05:02.334 "data_out_pool_size": 2048 00:05:02.334 } 00:05:02.334 } 00:05:02.334 ] 00:05:02.334 } 00:05:02.334 ] 00:05:02.334 } 00:05:02.334 15:16:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:02.334 15:16:32 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 971660 00:05:02.334 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 971660 ']' 00:05:02.334 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 971660 00:05:02.334 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:02.334 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:02.334 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 971660 00:05:02.334 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:02.334 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:02.334 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 971660' 00:05:02.334 killing process with pid 971660 00:05:02.334 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 971660 00:05:02.334 15:16:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 971660 00:05:02.590 15:16:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=971802 00:05:02.590 15:16:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:02.590 15:16:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:07.846 15:16:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 971802 00:05:07.846 15:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 971802 ']' 00:05:07.846 15:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 971802 00:05:07.846 15:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:07.846 15:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:07.846 15:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 971802 00:05:07.846 15:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:07.846 15:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:07.846 15:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 971802' 00:05:07.846 killing process with pid 971802 00:05:07.846 15:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 971802 00:05:07.846 15:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 971802 00:05:08.105 15:16:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:08.105 15:16:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:08.105 00:05:08.105 real 0m6.482s 00:05:08.105 user 0m6.075s 00:05:08.105 sys 0m0.683s 00:05:08.105 15:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.105 15:16:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.105 ************************************ 00:05:08.105 END TEST 
skip_rpc_with_json 00:05:08.105 ************************************ 00:05:08.105 15:16:38 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:08.105 15:16:38 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:08.105 15:16:38 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.105 15:16:38 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.105 15:16:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.105 ************************************ 00:05:08.105 START TEST skip_rpc_with_delay 00:05:08.105 ************************************ 00:05:08.105 15:16:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:08.105 15:16:38 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:08.105 15:16:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:08.105 15:16:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:08.105 15:16:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.105 15:16:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:08.105 15:16:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.105 15:16:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:08.105 15:16:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.105 15:16:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:08.105 15:16:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.105 15:16:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:08.105 15:16:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:08.105 [2024-07-13 15:16:38.812045] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
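The error just above is the point of skip_rpc_with_delay: spdk_tgt rejects --wait-for-rpc when the RPC server is disabled, and the test's NOT wrapper treats that failure as success. The failing invocation, exactly as launched in the trace:

# Expected to exit non-zero: '--wait-for-rpc' cannot be used together with '--no-rpc-server'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc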
00:05:08.105 [2024-07-13 15:16:38.812162] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:08.105 15:16:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:08.105 15:16:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:08.105 15:16:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:08.105 15:16:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:08.105 00:05:08.105 real 0m0.073s 00:05:08.105 user 0m0.044s 00:05:08.105 sys 0m0.027s 00:05:08.105 15:16:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.105 15:16:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:08.105 ************************************ 00:05:08.105 END TEST skip_rpc_with_delay 00:05:08.105 ************************************ 00:05:08.105 15:16:38 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:08.105 15:16:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:08.105 15:16:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:08.105 15:16:38 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:08.105 15:16:38 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.105 15:16:38 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.105 15:16:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.364 ************************************ 00:05:08.364 START TEST exit_on_failed_rpc_init 00:05:08.364 ************************************ 00:05:08.364 15:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:08.364 15:16:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=972517 00:05:08.364 15:16:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.364 15:16:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 972517 00:05:08.364 15:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 972517 ']' 00:05:08.364 15:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.364 15:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.364 15:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.364 15:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.364 15:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:08.364 [2024-07-13 15:16:38.930988] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:05:08.364 [2024-07-13 15:16:38.931063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid972517 ] 00:05:08.364 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.364 [2024-07-13 15:16:38.962426] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:08.364 [2024-07-13 15:16:38.991130] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.364 [2024-07-13 15:16:39.081309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.623 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.623 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:08.623 15:16:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.623 15:16:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:08.623 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:08.623 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:08.623 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.623 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:08.623 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.623 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:08.623 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.623 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:08.623 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.623 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:08.623 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:08.881 [2024-07-13 15:16:39.396872] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:08.881 [2024-07-13 15:16:39.396946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid972533 ] 00:05:08.881 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.881 [2024-07-13 15:16:39.427045] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:05:08.881 [2024-07-13 15:16:39.459544] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.881 [2024-07-13 15:16:39.552235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.881 [2024-07-13 15:16:39.552343] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:08.881 [2024-07-13 15:16:39.552365] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:08.881 [2024-07-13 15:16:39.552378] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:08.881 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:08.881 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:08.881 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:09.139 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:09.139 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:09.139 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:09.139 15:16:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:09.139 15:16:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 972517 00:05:09.139 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 972517 ']' 00:05:09.139 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 972517 00:05:09.139 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:09.139 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:09.139 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 972517 00:05:09.139 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:09.139 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:09.139 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 972517' 00:05:09.139 killing process with pid 972517 00:05:09.139 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 972517 00:05:09.139 15:16:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 972517 00:05:09.397 00:05:09.397 real 0m1.198s 00:05:09.397 user 0m1.281s 00:05:09.397 sys 0m0.470s 00:05:09.397 15:16:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.397 15:16:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:09.397 ************************************ 00:05:09.397 END TEST exit_on_failed_rpc_init 00:05:09.397 ************************************ 00:05:09.397 15:16:40 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:09.397 15:16:40 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:09.397 00:05:09.397 real 0m13.444s 00:05:09.397 user 0m12.638s 00:05:09.397 sys 0m1.657s 00:05:09.397 15:16:40 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.397 15:16:40 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:05:09.397 ************************************ 00:05:09.397 END TEST skip_rpc 00:05:09.397 ************************************ 00:05:09.397 15:16:40 -- common/autotest_common.sh@1142 -- # return 0 00:05:09.397 15:16:40 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:09.397 15:16:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.397 15:16:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.397 15:16:40 -- common/autotest_common.sh@10 -- # set +x 00:05:09.397 ************************************ 00:05:09.397 START TEST rpc_client 00:05:09.397 ************************************ 00:05:09.397 15:16:40 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:09.656 * Looking for test storage... 00:05:09.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:09.656 15:16:40 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:09.656 OK 00:05:09.656 15:16:40 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:09.656 00:05:09.656 real 0m0.069s 00:05:09.656 user 0m0.025s 00:05:09.656 sys 0m0.048s 00:05:09.656 15:16:40 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.656 15:16:40 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:09.656 ************************************ 00:05:09.656 END TEST rpc_client 00:05:09.656 ************************************ 00:05:09.656 15:16:40 -- common/autotest_common.sh@1142 -- # return 0 00:05:09.656 15:16:40 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:09.656 15:16:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.656 15:16:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.656 15:16:40 -- common/autotest_common.sh@10 -- # set +x 00:05:09.656 ************************************ 00:05:09.656 START TEST json_config 00:05:09.656 ************************************ 00:05:09.656 15:16:40 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:09.656 15:16:40 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:09.656 15:16:40 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:09.656 15:16:40 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.656 15:16:40 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.656 15:16:40 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.656 15:16:40 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:09.656 15:16:40 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.656 15:16:40 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.656 15:16:40 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.656 15:16:40 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.656 15:16:40 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.656 15:16:40 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.656 15:16:40 json_config -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:09.656 15:16:40 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:09.656 15:16:40 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.656 15:16:40 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.656 15:16:40 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:09.656 15:16:40 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:09.656 15:16:40 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:09.657 15:16:40 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.657 15:16:40 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.657 15:16:40 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.657 15:16:40 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.657 15:16:40 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.657 15:16:40 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.657 15:16:40 json_config -- paths/export.sh@5 -- # export PATH 00:05:09.657 15:16:40 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.657 15:16:40 json_config -- nvmf/common.sh@47 -- # : 0 00:05:09.657 15:16:40 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:09.657 15:16:40 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:09.657 15:16:40 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.657 15:16:40 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.657 15:16:40 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.657 15:16:40 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:09.657 15:16:40 json_config -- nvmf/common.sh@35 -- 
# '[' 0 -eq 1 ']' 00:05:09.657 15:16:40 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:09.657 15:16:40 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:09.657 15:16:40 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:09.657 15:16:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:09.657 15:16:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:09.657 15:16:40 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:09.657 15:16:40 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:09.657 15:16:40 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:09.657 15:16:40 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:09.657 15:16:40 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:09.657 15:16:40 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:09.657 15:16:40 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:09.657 15:16:40 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:09.657 15:16:40 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:09.657 15:16:40 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:09.657 15:16:40 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:09.657 15:16:40 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:09.657 INFO: JSON configuration test init 00:05:09.657 15:16:40 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:09.657 15:16:40 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:09.657 15:16:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:09.657 15:16:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.657 15:16:40 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:09.657 15:16:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:09.657 15:16:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.657 15:16:40 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:09.657 15:16:40 json_config -- json_config/common.sh@9 -- # local app=target 00:05:09.657 15:16:40 json_config -- json_config/common.sh@10 -- # shift 00:05:09.657 15:16:40 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:09.657 15:16:40 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:09.657 15:16:40 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:09.657 15:16:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.657 15:16:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.657 15:16:40 json_config -- json_config/common.sh@22 -- # 
app_pid["$app"]=972775 00:05:09.657 15:16:40 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:09.657 15:16:40 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:09.657 Waiting for target to run... 00:05:09.657 15:16:40 json_config -- json_config/common.sh@25 -- # waitforlisten 972775 /var/tmp/spdk_tgt.sock 00:05:09.657 15:16:40 json_config -- common/autotest_common.sh@829 -- # '[' -z 972775 ']' 00:05:09.657 15:16:40 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:09.657 15:16:40 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.657 15:16:40 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:09.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:09.657 15:16:40 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.657 15:16:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.657 [2024-07-13 15:16:40.381642] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:09.657 [2024-07-13 15:16:40.381740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid972775 ] 00:05:09.657 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.224 [2024-07-13 15:16:40.710423] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:10.224 [2024-07-13 15:16:40.744456] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.224 [2024-07-13 15:16:40.807542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.789 15:16:41 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.789 15:16:41 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:10.789 15:16:41 json_config -- json_config/common.sh@26 -- # echo '' 00:05:10.789 00:05:10.789 15:16:41 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:10.789 15:16:41 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:10.789 15:16:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:10.789 15:16:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.789 15:16:41 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:10.789 15:16:41 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:10.789 15:16:41 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:10.789 15:16:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.789 15:16:41 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:10.789 15:16:41 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:10.789 15:16:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:14.067 15:16:44 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:14.067 15:16:44 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:14.067 15:16:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:14.067 15:16:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.067 15:16:44 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:14.067 15:16:44 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:14.067 15:16:44 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:14.067 15:16:44 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:14.067 15:16:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:14.067 15:16:44 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:14.067 15:16:44 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:14.067 15:16:44 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:14.067 15:16:44 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:14.067 15:16:44 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:14.067 15:16:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:14.067 15:16:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.067 15:16:44 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:14.067 15:16:44 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:14.067 15:16:44 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 
]] 00:05:14.067 15:16:44 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:14.067 15:16:44 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:14.067 15:16:44 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:14.067 15:16:44 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:14.067 15:16:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:14.067 15:16:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.067 15:16:44 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:14.067 15:16:44 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:14.067 15:16:44 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:14.067 15:16:44 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:14.067 15:16:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:14.325 MallocForNvmf0 00:05:14.325 15:16:45 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:14.325 15:16:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:14.590 MallocForNvmf1 00:05:14.590 15:16:45 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:14.590 15:16:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:14.904 [2024-07-13 15:16:45.514312] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:14.904 15:16:45 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:14.904 15:16:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:15.163 15:16:45 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:15.163 15:16:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:15.421 15:16:46 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:15.421 15:16:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:15.679 15:16:46 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:15.679 15:16:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:15.938 [2024-07-13 
15:16:46.501483] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:15.938 15:16:46 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:15.938 15:16:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:15.938 15:16:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.938 15:16:46 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:15.938 15:16:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:15.938 15:16:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.938 15:16:46 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:15.938 15:16:46 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:15.938 15:16:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:16.196 MallocBdevForConfigChangeCheck 00:05:16.196 15:16:46 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:16.196 15:16:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:16.196 15:16:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.196 15:16:46 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:16.196 15:16:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:16.454 15:16:47 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:16.454 INFO: shutting down applications... 
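Stripped of the shell tracing, the create_nvmf_subsystem_config step above amounts to a short sequence of RPCs against the target socket (all values as logged):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk_tgt.sock

  $rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0        # backing namespace 0
  $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1       # backing namespace 1
  $rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0             # TCP transport init
  $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420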
00:05:16.454 15:16:47 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:16.454 15:16:47 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:16.454 15:16:47 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:16.454 15:16:47 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:18.355 Calling clear_iscsi_subsystem 00:05:18.355 Calling clear_nvmf_subsystem 00:05:18.355 Calling clear_nbd_subsystem 00:05:18.355 Calling clear_ublk_subsystem 00:05:18.355 Calling clear_vhost_blk_subsystem 00:05:18.355 Calling clear_vhost_scsi_subsystem 00:05:18.355 Calling clear_bdev_subsystem 00:05:18.355 15:16:48 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:18.355 15:16:48 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:18.355 15:16:48 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:18.355 15:16:48 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:18.355 15:16:48 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:18.355 15:16:48 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:18.612 15:16:49 json_config -- json_config/json_config.sh@345 -- # break 00:05:18.612 15:16:49 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:18.612 15:16:49 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:18.612 15:16:49 json_config -- json_config/common.sh@31 -- # local app=target 00:05:18.612 15:16:49 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:18.612 15:16:49 json_config -- json_config/common.sh@35 -- # [[ -n 972775 ]] 00:05:18.612 15:16:49 json_config -- json_config/common.sh@38 -- # kill -SIGINT 972775 00:05:18.612 15:16:49 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:18.612 15:16:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.612 15:16:49 json_config -- json_config/common.sh@41 -- # kill -0 972775 00:05:18.612 15:16:49 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:19.180 15:16:49 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:19.180 15:16:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.180 15:16:49 json_config -- json_config/common.sh@41 -- # kill -0 972775 00:05:19.180 15:16:49 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:19.180 15:16:49 json_config -- json_config/common.sh@43 -- # break 00:05:19.180 15:16:49 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:19.180 15:16:49 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:19.180 SPDK target shutdown done 00:05:19.180 15:16:49 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:19.180 INFO: relaunching applications... 
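The target shutdown traced above is a SIGINT followed by a bounded poll of the target PID (30 iterations of 0.5 s, so roughly 15 seconds). Approximately:

  kill -SIGINT "$app_pid"
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$app_pid" 2>/dev/null || break    # process exited: shutdown complete
      sleep 0.5
  done
  echo 'SPDK target shutdown done'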
00:05:19.180 15:16:49 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.180 15:16:49 json_config -- json_config/common.sh@9 -- # local app=target 00:05:19.180 15:16:49 json_config -- json_config/common.sh@10 -- # shift 00:05:19.180 15:16:49 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:19.180 15:16:49 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:19.180 15:16:49 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:19.180 15:16:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.180 15:16:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.180 15:16:49 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=974035 00:05:19.180 15:16:49 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.180 15:16:49 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:19.180 Waiting for target to run... 00:05:19.180 15:16:49 json_config -- json_config/common.sh@25 -- # waitforlisten 974035 /var/tmp/spdk_tgt.sock 00:05:19.180 15:16:49 json_config -- common/autotest_common.sh@829 -- # '[' -z 974035 ']' 00:05:19.180 15:16:49 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.180 15:16:49 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.180 15:16:49 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:19.180 15:16:49 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.180 15:16:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.180 [2024-07-13 15:16:49.762756] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:19.180 [2024-07-13 15:16:49.762858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid974035 ] 00:05:19.180 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.438 [2024-07-13 15:16:50.093809] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
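The relaunch differs from the initial start only in how configuration is supplied: the JSON saved from the first run is passed directly with --json instead of being replayed over RPC, i.e.:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
      -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json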
00:05:19.438 [2024-07-13 15:16:50.126201] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.438 [2024-07-13 15:16:50.189240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.719 [2024-07-13 15:16:53.221799] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:22.719 [2024-07-13 15:16:53.254271] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:22.719 15:16:53 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.719 15:16:53 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:22.719 15:16:53 json_config -- json_config/common.sh@26 -- # echo '' 00:05:22.719 00:05:22.719 15:16:53 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:22.719 15:16:53 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:22.719 INFO: Checking if target configuration is the same... 00:05:22.719 15:16:53 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.719 15:16:53 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:22.719 15:16:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.719 + '[' 2 -ne 2 ']' 00:05:22.719 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:22.719 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:22.719 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:22.719 +++ basename /dev/fd/62 00:05:22.719 ++ mktemp /tmp/62.XXX 00:05:22.719 + tmp_file_1=/tmp/62.5XU 00:05:22.719 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.719 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:22.719 + tmp_file_2=/tmp/spdk_tgt_config.json.n1a 00:05:22.719 + ret=0 00:05:22.719 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:22.976 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:22.976 + diff -u /tmp/62.5XU /tmp/spdk_tgt_config.json.n1a 00:05:22.976 + echo 'INFO: JSON config files are the same' 00:05:22.976 INFO: JSON config files are the same 00:05:22.976 + rm /tmp/62.5XU /tmp/spdk_tgt_config.json.n1a 00:05:22.976 + exit 0 00:05:22.976 15:16:53 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:22.976 15:16:53 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:22.976 INFO: changing configuration and checking if this can be detected... 
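json_diff.sh, used above, reduces the comparison to a sort-and-diff: both the live configuration and the file written earlier are passed through config_filter.py and compared textually. A condensed sketch, assuming config_filter.py filters stdin to stdout as the trace suggests; the temp-file names here are illustrative:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py

  # Live config from the running target vs. the config file written earlier, both sorted.
  $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.json
  $filter -method sort < /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json > /tmp/file.json

  diff -u /tmp/live.json /tmp/file.json && echo 'INFO: JSON config files are the same'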
00:05:22.976 15:16:53 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:22.976 15:16:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:23.233 15:16:53 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.233 15:16:53 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:23.233 15:16:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:23.233 + '[' 2 -ne 2 ']' 00:05:23.233 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:23.233 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:23.233 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:23.233 +++ basename /dev/fd/62 00:05:23.233 ++ mktemp /tmp/62.XXX 00:05:23.233 + tmp_file_1=/tmp/62.2Y9 00:05:23.233 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.233 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:23.233 + tmp_file_2=/tmp/spdk_tgt_config.json.mF3 00:05:23.233 + ret=0 00:05:23.233 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:23.797 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:23.797 + diff -u /tmp/62.2Y9 /tmp/spdk_tgt_config.json.mF3 00:05:23.797 + ret=1 00:05:23.797 + echo '=== Start of file: /tmp/62.2Y9 ===' 00:05:23.797 + cat /tmp/62.2Y9 00:05:23.797 + echo '=== End of file: /tmp/62.2Y9 ===' 00:05:23.797 + echo '' 00:05:23.797 + echo '=== Start of file: /tmp/spdk_tgt_config.json.mF3 ===' 00:05:23.797 + cat /tmp/spdk_tgt_config.json.mF3 00:05:23.797 + echo '=== End of file: /tmp/spdk_tgt_config.json.mF3 ===' 00:05:23.797 + echo '' 00:05:23.797 + rm /tmp/62.2Y9 /tmp/spdk_tgt_config.json.mF3 00:05:23.797 + exit 1 00:05:23.798 15:16:54 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:23.798 INFO: configuration change detected. 
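The change-detection pass reuses the same machinery: a bdev still present in the saved file is deleted from the running target, after which the sorted configs can no longer match and the diff is expected to exit non-zero:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  # Re-running the sort-and-diff above now fails (ret=1), which the test reports as
  # 'INFO: configuration change detected.'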
00:05:23.798 15:16:54 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:23.798 15:16:54 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:23.798 15:16:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:23.798 15:16:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.798 15:16:54 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:23.798 15:16:54 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:23.798 15:16:54 json_config -- json_config/json_config.sh@317 -- # [[ -n 974035 ]] 00:05:23.798 15:16:54 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:23.798 15:16:54 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:23.798 15:16:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:23.798 15:16:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.798 15:16:54 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:23.798 15:16:54 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:23.798 15:16:54 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:23.798 15:16:54 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:23.798 15:16:54 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:23.798 15:16:54 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:23.798 15:16:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:23.798 15:16:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.798 15:16:54 json_config -- json_config/json_config.sh@323 -- # killprocess 974035 00:05:23.798 15:16:54 json_config -- common/autotest_common.sh@948 -- # '[' -z 974035 ']' 00:05:23.798 15:16:54 json_config -- common/autotest_common.sh@952 -- # kill -0 974035 00:05:23.798 15:16:54 json_config -- common/autotest_common.sh@953 -- # uname 00:05:23.798 15:16:54 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.798 15:16:54 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 974035 00:05:23.798 15:16:54 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.798 15:16:54 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.798 15:16:54 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 974035' 00:05:23.798 killing process with pid 974035 00:05:23.798 15:16:54 json_config -- common/autotest_common.sh@967 -- # kill 974035 00:05:23.798 15:16:54 json_config -- common/autotest_common.sh@972 -- # wait 974035 00:05:25.693 15:16:56 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:25.693 15:16:56 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:25.693 15:16:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:25.693 15:16:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.693 15:16:56 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:25.693 15:16:56 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:25.693 INFO: Success 00:05:25.693 00:05:25.693 real 0m15.812s 00:05:25.693 user 
0m17.762s 00:05:25.693 sys 0m1.852s 00:05:25.693 15:16:56 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.693 15:16:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.693 ************************************ 00:05:25.693 END TEST json_config 00:05:25.693 ************************************ 00:05:25.693 15:16:56 -- common/autotest_common.sh@1142 -- # return 0 00:05:25.693 15:16:56 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:25.693 15:16:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.693 15:16:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.693 15:16:56 -- common/autotest_common.sh@10 -- # set +x 00:05:25.693 ************************************ 00:05:25.693 START TEST json_config_extra_key 00:05:25.693 ************************************ 00:05:25.693 15:16:56 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:25.693 15:16:56 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:25.693 15:16:56 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:25.693 15:16:56 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:25.693 15:16:56 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:25.693 15:16:56 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:25.693 15:16:56 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:25.693 15:16:56 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:25.693 15:16:56 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:25.693 15:16:56 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:25.693 15:16:56 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:25.693 15:16:56 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:25.693 15:16:56 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:25.693 15:16:56 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:25.693 15:16:56 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:25.693 15:16:56 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:25.693 15:16:56 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:25.693 15:16:56 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:25.693 15:16:56 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:25.694 15:16:56 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:25.694 15:16:56 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:25.694 15:16:56 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:25.694 15:16:56 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:25.694 15:16:56 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.694 15:16:56 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.694 15:16:56 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.694 15:16:56 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:25.694 15:16:56 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.694 15:16:56 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:25.694 15:16:56 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:25.694 15:16:56 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:25.694 15:16:56 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:25.694 15:16:56 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:25.694 15:16:56 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:25.694 15:16:56 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:25.694 15:16:56 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:25.694 15:16:56 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:25.694 15:16:56 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:25.694 15:16:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:25.694 15:16:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:25.694 15:16:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:25.694 15:16:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:25.694 15:16:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:25.694 15:16:56 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:25.694 15:16:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:25.694 15:16:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:25.694 15:16:56 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:25.694 15:16:56 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:25.694 INFO: launching applications... 00:05:25.694 15:16:56 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:25.694 15:16:56 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:25.694 15:16:56 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:25.694 15:16:56 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:25.694 15:16:56 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:25.694 15:16:56 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:25.694 15:16:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:25.694 15:16:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:25.694 15:16:56 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=974872 00:05:25.694 15:16:56 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:25.694 15:16:56 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:25.694 Waiting for target to run... 00:05:25.694 15:16:56 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 974872 /var/tmp/spdk_tgt.sock 00:05:25.694 15:16:56 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 974872 ']' 00:05:25.694 15:16:56 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:25.694 15:16:56 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.694 15:16:56 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:25.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:25.694 15:16:56 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.694 15:16:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:25.694 [2024-07-13 15:16:56.227546] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:05:25.694 [2024-07-13 15:16:56.227630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid974872 ] 00:05:25.694 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.952 [2024-07-13 15:16:56.686641] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:26.209 [2024-07-13 15:16:56.720996] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.209 [2024-07-13 15:16:56.794097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.465 15:16:57 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.465 15:16:57 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:26.465 15:16:57 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:26.465 00:05:26.465 15:16:57 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:26.465 INFO: shutting down applications... 00:05:26.465 15:16:57 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:26.465 15:16:57 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:26.465 15:16:57 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:26.465 15:16:57 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 974872 ]] 00:05:26.465 15:16:57 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 974872 00:05:26.465 15:16:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:26.465 15:16:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.465 15:16:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 974872 00:05:26.465 15:16:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:27.031 15:16:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:27.031 15:16:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.031 15:16:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 974872 00:05:27.031 15:16:57 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:27.031 15:16:57 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:27.031 15:16:57 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:27.031 15:16:57 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:27.031 SPDK target shutdown done 00:05:27.031 15:16:57 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:27.031 Success 00:05:27.031 00:05:27.031 real 0m1.591s 00:05:27.031 user 0m1.447s 00:05:27.031 sys 0m0.585s 00:05:27.031 15:16:57 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.031 15:16:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:27.031 ************************************ 00:05:27.031 END TEST json_config_extra_key 00:05:27.031 ************************************ 00:05:27.031 15:16:57 -- common/autotest_common.sh@1142 -- # return 0 00:05:27.031 15:16:57 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:27.031 15:16:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.031 15:16:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.031 15:16:57 -- common/autotest_common.sh@10 -- # set +x 00:05:27.031 ************************************ 00:05:27.031 START TEST alias_rpc 00:05:27.031 ************************************ 00:05:27.031 15:16:57 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:27.289 * Looking for test storage... 00:05:27.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:27.289 15:16:57 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:27.289 15:16:57 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=975177 00:05:27.289 15:16:57 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.289 15:16:57 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 975177 00:05:27.289 15:16:57 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 975177 ']' 00:05:27.289 15:16:57 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.289 15:16:57 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.289 15:16:57 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.289 15:16:57 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.289 15:16:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.289 [2024-07-13 15:16:57.864385] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:27.289 [2024-07-13 15:16:57.864470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid975177 ] 00:05:27.289 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.289 [2024-07-13 15:16:57.895062] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:27.289 [2024-07-13 15:16:57.924979] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.289 [2024-07-13 15:16:58.014596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.546 15:16:58 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.546 15:16:58 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:27.546 15:16:58 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:27.804 15:16:58 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 975177 00:05:27.804 15:16:58 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 975177 ']' 00:05:27.804 15:16:58 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 975177 00:05:27.804 15:16:58 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:27.804 15:16:58 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.804 15:16:58 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 975177 00:05:28.063 15:16:58 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:28.063 15:16:58 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:28.063 15:16:58 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 975177' 00:05:28.063 killing process with pid 975177 00:05:28.063 15:16:58 alias_rpc -- common/autotest_common.sh@967 -- # kill 975177 00:05:28.063 15:16:58 alias_rpc -- common/autotest_common.sh@972 -- # wait 975177 00:05:28.321 00:05:28.321 real 0m1.212s 00:05:28.321 user 0m1.279s 00:05:28.321 sys 0m0.427s 00:05:28.321 15:16:58 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.321 15:16:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.321 ************************************ 00:05:28.321 END TEST alias_rpc 00:05:28.321 ************************************ 00:05:28.321 15:16:58 -- common/autotest_common.sh@1142 -- # return 0 00:05:28.321 15:16:58 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:28.321 15:16:58 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:28.321 15:16:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.321 15:16:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.321 15:16:58 -- common/autotest_common.sh@10 -- # set +x 00:05:28.321 ************************************ 00:05:28.321 START TEST spdkcli_tcp 00:05:28.321 ************************************ 00:05:28.321 15:16:59 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:28.321 * Looking for test storage... 
00:05:28.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:28.321 15:16:59 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:28.321 15:16:59 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:28.321 15:16:59 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:28.321 15:16:59 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:28.321 15:16:59 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:28.321 15:16:59 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:28.321 15:16:59 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:28.321 15:16:59 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.321 15:16:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.321 15:16:59 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=975367 00:05:28.321 15:16:59 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:28.321 15:16:59 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 975367 00:05:28.321 15:16:59 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 975367 ']' 00:05:28.321 15:16:59 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.321 15:16:59 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.321 15:16:59 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.321 15:16:59 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.321 15:16:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.579 [2024-07-13 15:16:59.124619] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:28.579 [2024-07-13 15:16:59.124699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid975367 ] 00:05:28.579 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.579 [2024-07-13 15:16:59.157260] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
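The spdkcli_tcp run that follows drives rpc.py over TCP instead of the UNIX socket: socat bridges 127.0.0.1:9998 to /var/tmp/spdk.sock, and the client talks to the TCP side. The two halves, as they appear in the trace below:

  # Bridge: TCP port 9998 <-> the target's UNIX-domain RPC socket.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &

  # Client side: retry/timeout options as used by the test, then list the RPC methods.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods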
00:05:28.579 [2024-07-13 15:16:59.183369] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.579 [2024-07-13 15:16:59.267739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.579 [2024-07-13 15:16:59.267743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.837 15:16:59 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.837 15:16:59 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:28.837 15:16:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=975382 00:05:28.837 15:16:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:28.837 15:16:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:29.095 [ 00:05:29.095 "bdev_malloc_delete", 00:05:29.095 "bdev_malloc_create", 00:05:29.096 "bdev_null_resize", 00:05:29.096 "bdev_null_delete", 00:05:29.096 "bdev_null_create", 00:05:29.096 "bdev_nvme_cuse_unregister", 00:05:29.096 "bdev_nvme_cuse_register", 00:05:29.096 "bdev_opal_new_user", 00:05:29.096 "bdev_opal_set_lock_state", 00:05:29.096 "bdev_opal_delete", 00:05:29.096 "bdev_opal_get_info", 00:05:29.096 "bdev_opal_create", 00:05:29.096 "bdev_nvme_opal_revert", 00:05:29.096 "bdev_nvme_opal_init", 00:05:29.096 "bdev_nvme_send_cmd", 00:05:29.096 "bdev_nvme_get_path_iostat", 00:05:29.096 "bdev_nvme_get_mdns_discovery_info", 00:05:29.096 "bdev_nvme_stop_mdns_discovery", 00:05:29.096 "bdev_nvme_start_mdns_discovery", 00:05:29.096 "bdev_nvme_set_multipath_policy", 00:05:29.096 "bdev_nvme_set_preferred_path", 00:05:29.096 "bdev_nvme_get_io_paths", 00:05:29.096 "bdev_nvme_remove_error_injection", 00:05:29.096 "bdev_nvme_add_error_injection", 00:05:29.096 "bdev_nvme_get_discovery_info", 00:05:29.096 "bdev_nvme_stop_discovery", 00:05:29.096 "bdev_nvme_start_discovery", 00:05:29.096 "bdev_nvme_get_controller_health_info", 00:05:29.096 "bdev_nvme_disable_controller", 00:05:29.096 "bdev_nvme_enable_controller", 00:05:29.096 "bdev_nvme_reset_controller", 00:05:29.096 "bdev_nvme_get_transport_statistics", 00:05:29.096 "bdev_nvme_apply_firmware", 00:05:29.096 "bdev_nvme_detach_controller", 00:05:29.096 "bdev_nvme_get_controllers", 00:05:29.096 "bdev_nvme_attach_controller", 00:05:29.096 "bdev_nvme_set_hotplug", 00:05:29.096 "bdev_nvme_set_options", 00:05:29.096 "bdev_passthru_delete", 00:05:29.096 "bdev_passthru_create", 00:05:29.096 "bdev_lvol_set_parent_bdev", 00:05:29.096 "bdev_lvol_set_parent", 00:05:29.096 "bdev_lvol_check_shallow_copy", 00:05:29.096 "bdev_lvol_start_shallow_copy", 00:05:29.096 "bdev_lvol_grow_lvstore", 00:05:29.096 "bdev_lvol_get_lvols", 00:05:29.096 "bdev_lvol_get_lvstores", 00:05:29.096 "bdev_lvol_delete", 00:05:29.096 "bdev_lvol_set_read_only", 00:05:29.096 "bdev_lvol_resize", 00:05:29.096 "bdev_lvol_decouple_parent", 00:05:29.096 "bdev_lvol_inflate", 00:05:29.096 "bdev_lvol_rename", 00:05:29.096 "bdev_lvol_clone_bdev", 00:05:29.096 "bdev_lvol_clone", 00:05:29.096 "bdev_lvol_snapshot", 00:05:29.096 "bdev_lvol_create", 00:05:29.096 "bdev_lvol_delete_lvstore", 00:05:29.096 "bdev_lvol_rename_lvstore", 00:05:29.096 "bdev_lvol_create_lvstore", 00:05:29.096 "bdev_raid_set_options", 00:05:29.096 "bdev_raid_remove_base_bdev", 00:05:29.096 "bdev_raid_add_base_bdev", 00:05:29.096 "bdev_raid_delete", 00:05:29.096 "bdev_raid_create", 00:05:29.096 "bdev_raid_get_bdevs", 00:05:29.096 "bdev_error_inject_error", 00:05:29.096 "bdev_error_delete", 
00:05:29.096 "bdev_error_create", 00:05:29.096 "bdev_split_delete", 00:05:29.096 "bdev_split_create", 00:05:29.096 "bdev_delay_delete", 00:05:29.096 "bdev_delay_create", 00:05:29.096 "bdev_delay_update_latency", 00:05:29.096 "bdev_zone_block_delete", 00:05:29.096 "bdev_zone_block_create", 00:05:29.096 "blobfs_create", 00:05:29.096 "blobfs_detect", 00:05:29.096 "blobfs_set_cache_size", 00:05:29.096 "bdev_aio_delete", 00:05:29.096 "bdev_aio_rescan", 00:05:29.096 "bdev_aio_create", 00:05:29.096 "bdev_ftl_set_property", 00:05:29.096 "bdev_ftl_get_properties", 00:05:29.096 "bdev_ftl_get_stats", 00:05:29.096 "bdev_ftl_unmap", 00:05:29.096 "bdev_ftl_unload", 00:05:29.096 "bdev_ftl_delete", 00:05:29.096 "bdev_ftl_load", 00:05:29.096 "bdev_ftl_create", 00:05:29.096 "bdev_virtio_attach_controller", 00:05:29.096 "bdev_virtio_scsi_get_devices", 00:05:29.096 "bdev_virtio_detach_controller", 00:05:29.096 "bdev_virtio_blk_set_hotplug", 00:05:29.096 "bdev_iscsi_delete", 00:05:29.096 "bdev_iscsi_create", 00:05:29.096 "bdev_iscsi_set_options", 00:05:29.096 "accel_error_inject_error", 00:05:29.096 "ioat_scan_accel_module", 00:05:29.096 "dsa_scan_accel_module", 00:05:29.096 "iaa_scan_accel_module", 00:05:29.096 "vfu_virtio_create_scsi_endpoint", 00:05:29.096 "vfu_virtio_scsi_remove_target", 00:05:29.096 "vfu_virtio_scsi_add_target", 00:05:29.096 "vfu_virtio_create_blk_endpoint", 00:05:29.096 "vfu_virtio_delete_endpoint", 00:05:29.096 "keyring_file_remove_key", 00:05:29.096 "keyring_file_add_key", 00:05:29.096 "keyring_linux_set_options", 00:05:29.096 "iscsi_get_histogram", 00:05:29.096 "iscsi_enable_histogram", 00:05:29.096 "iscsi_set_options", 00:05:29.096 "iscsi_get_auth_groups", 00:05:29.096 "iscsi_auth_group_remove_secret", 00:05:29.096 "iscsi_auth_group_add_secret", 00:05:29.096 "iscsi_delete_auth_group", 00:05:29.096 "iscsi_create_auth_group", 00:05:29.096 "iscsi_set_discovery_auth", 00:05:29.096 "iscsi_get_options", 00:05:29.096 "iscsi_target_node_request_logout", 00:05:29.096 "iscsi_target_node_set_redirect", 00:05:29.096 "iscsi_target_node_set_auth", 00:05:29.096 "iscsi_target_node_add_lun", 00:05:29.096 "iscsi_get_stats", 00:05:29.096 "iscsi_get_connections", 00:05:29.096 "iscsi_portal_group_set_auth", 00:05:29.096 "iscsi_start_portal_group", 00:05:29.096 "iscsi_delete_portal_group", 00:05:29.096 "iscsi_create_portal_group", 00:05:29.096 "iscsi_get_portal_groups", 00:05:29.096 "iscsi_delete_target_node", 00:05:29.096 "iscsi_target_node_remove_pg_ig_maps", 00:05:29.096 "iscsi_target_node_add_pg_ig_maps", 00:05:29.096 "iscsi_create_target_node", 00:05:29.096 "iscsi_get_target_nodes", 00:05:29.096 "iscsi_delete_initiator_group", 00:05:29.096 "iscsi_initiator_group_remove_initiators", 00:05:29.096 "iscsi_initiator_group_add_initiators", 00:05:29.096 "iscsi_create_initiator_group", 00:05:29.096 "iscsi_get_initiator_groups", 00:05:29.096 "nvmf_set_crdt", 00:05:29.096 "nvmf_set_config", 00:05:29.096 "nvmf_set_max_subsystems", 00:05:29.096 "nvmf_stop_mdns_prr", 00:05:29.096 "nvmf_publish_mdns_prr", 00:05:29.096 "nvmf_subsystem_get_listeners", 00:05:29.096 "nvmf_subsystem_get_qpairs", 00:05:29.096 "nvmf_subsystem_get_controllers", 00:05:29.096 "nvmf_get_stats", 00:05:29.096 "nvmf_get_transports", 00:05:29.096 "nvmf_create_transport", 00:05:29.096 "nvmf_get_targets", 00:05:29.096 "nvmf_delete_target", 00:05:29.096 "nvmf_create_target", 00:05:29.096 "nvmf_subsystem_allow_any_host", 00:05:29.096 "nvmf_subsystem_remove_host", 00:05:29.096 "nvmf_subsystem_add_host", 00:05:29.096 "nvmf_ns_remove_host", 
00:05:29.096 "nvmf_ns_add_host", 00:05:29.096 "nvmf_subsystem_remove_ns", 00:05:29.096 "nvmf_subsystem_add_ns", 00:05:29.096 "nvmf_subsystem_listener_set_ana_state", 00:05:29.096 "nvmf_discovery_get_referrals", 00:05:29.096 "nvmf_discovery_remove_referral", 00:05:29.096 "nvmf_discovery_add_referral", 00:05:29.096 "nvmf_subsystem_remove_listener", 00:05:29.096 "nvmf_subsystem_add_listener", 00:05:29.096 "nvmf_delete_subsystem", 00:05:29.096 "nvmf_create_subsystem", 00:05:29.096 "nvmf_get_subsystems", 00:05:29.096 "env_dpdk_get_mem_stats", 00:05:29.096 "nbd_get_disks", 00:05:29.096 "nbd_stop_disk", 00:05:29.096 "nbd_start_disk", 00:05:29.096 "ublk_recover_disk", 00:05:29.096 "ublk_get_disks", 00:05:29.096 "ublk_stop_disk", 00:05:29.096 "ublk_start_disk", 00:05:29.096 "ublk_destroy_target", 00:05:29.096 "ublk_create_target", 00:05:29.096 "virtio_blk_create_transport", 00:05:29.096 "virtio_blk_get_transports", 00:05:29.096 "vhost_controller_set_coalescing", 00:05:29.096 "vhost_get_controllers", 00:05:29.096 "vhost_delete_controller", 00:05:29.096 "vhost_create_blk_controller", 00:05:29.096 "vhost_scsi_controller_remove_target", 00:05:29.096 "vhost_scsi_controller_add_target", 00:05:29.096 "vhost_start_scsi_controller", 00:05:29.096 "vhost_create_scsi_controller", 00:05:29.096 "thread_set_cpumask", 00:05:29.096 "framework_get_governor", 00:05:29.096 "framework_get_scheduler", 00:05:29.096 "framework_set_scheduler", 00:05:29.096 "framework_get_reactors", 00:05:29.096 "thread_get_io_channels", 00:05:29.096 "thread_get_pollers", 00:05:29.096 "thread_get_stats", 00:05:29.096 "framework_monitor_context_switch", 00:05:29.096 "spdk_kill_instance", 00:05:29.096 "log_enable_timestamps", 00:05:29.096 "log_get_flags", 00:05:29.096 "log_clear_flag", 00:05:29.096 "log_set_flag", 00:05:29.096 "log_get_level", 00:05:29.096 "log_set_level", 00:05:29.096 "log_get_print_level", 00:05:29.096 "log_set_print_level", 00:05:29.096 "framework_enable_cpumask_locks", 00:05:29.096 "framework_disable_cpumask_locks", 00:05:29.096 "framework_wait_init", 00:05:29.096 "framework_start_init", 00:05:29.096 "scsi_get_devices", 00:05:29.096 "bdev_get_histogram", 00:05:29.096 "bdev_enable_histogram", 00:05:29.096 "bdev_set_qos_limit", 00:05:29.096 "bdev_set_qd_sampling_period", 00:05:29.096 "bdev_get_bdevs", 00:05:29.096 "bdev_reset_iostat", 00:05:29.096 "bdev_get_iostat", 00:05:29.096 "bdev_examine", 00:05:29.096 "bdev_wait_for_examine", 00:05:29.096 "bdev_set_options", 00:05:29.096 "notify_get_notifications", 00:05:29.096 "notify_get_types", 00:05:29.096 "accel_get_stats", 00:05:29.096 "accel_set_options", 00:05:29.096 "accel_set_driver", 00:05:29.096 "accel_crypto_key_destroy", 00:05:29.096 "accel_crypto_keys_get", 00:05:29.096 "accel_crypto_key_create", 00:05:29.096 "accel_assign_opc", 00:05:29.096 "accel_get_module_info", 00:05:29.096 "accel_get_opc_assignments", 00:05:29.096 "vmd_rescan", 00:05:29.096 "vmd_remove_device", 00:05:29.096 "vmd_enable", 00:05:29.096 "sock_get_default_impl", 00:05:29.096 "sock_set_default_impl", 00:05:29.096 "sock_impl_set_options", 00:05:29.096 "sock_impl_get_options", 00:05:29.096 "iobuf_get_stats", 00:05:29.096 "iobuf_set_options", 00:05:29.096 "keyring_get_keys", 00:05:29.096 "framework_get_pci_devices", 00:05:29.096 "framework_get_config", 00:05:29.096 "framework_get_subsystems", 00:05:29.096 "vfu_tgt_set_base_path", 00:05:29.096 "trace_get_info", 00:05:29.097 "trace_get_tpoint_group_mask", 00:05:29.097 "trace_disable_tpoint_group", 00:05:29.097 "trace_enable_tpoint_group", 00:05:29.097 
"trace_clear_tpoint_mask", 00:05:29.097 "trace_set_tpoint_mask", 00:05:29.097 "spdk_get_version", 00:05:29.097 "rpc_get_methods" 00:05:29.097 ] 00:05:29.097 15:16:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:29.097 15:16:59 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.097 15:16:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.097 15:16:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:29.097 15:16:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 975367 00:05:29.097 15:16:59 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 975367 ']' 00:05:29.097 15:16:59 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 975367 00:05:29.097 15:16:59 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:29.097 15:16:59 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:29.097 15:16:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 975367 00:05:29.097 15:16:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:29.097 15:16:59 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:29.097 15:16:59 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 975367' 00:05:29.097 killing process with pid 975367 00:05:29.097 15:16:59 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 975367 00:05:29.097 15:16:59 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 975367 00:05:29.663 00:05:29.663 real 0m1.182s 00:05:29.663 user 0m2.105s 00:05:29.663 sys 0m0.433s 00:05:29.663 15:17:00 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.663 15:17:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.663 ************************************ 00:05:29.663 END TEST spdkcli_tcp 00:05:29.663 ************************************ 00:05:29.663 15:17:00 -- common/autotest_common.sh@1142 -- # return 0 00:05:29.663 15:17:00 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:29.663 15:17:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.663 15:17:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.663 15:17:00 -- common/autotest_common.sh@10 -- # set +x 00:05:29.663 ************************************ 00:05:29.663 START TEST dpdk_mem_utility 00:05:29.663 ************************************ 00:05:29.663 15:17:00 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:29.663 * Looking for test storage... 
00:05:29.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:29.663 15:17:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:29.663 15:17:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=975569 00:05:29.663 15:17:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.663 15:17:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 975569 00:05:29.663 15:17:00 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 975569 ']' 00:05:29.663 15:17:00 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.663 15:17:00 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.663 15:17:00 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.663 15:17:00 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.663 15:17:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:29.663 [2024-07-13 15:17:00.356023] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:29.663 [2024-07-13 15:17:00.356106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid975569 ] 00:05:29.663 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.663 [2024-07-13 15:17:00.388197] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
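(Reference note, not part of the captured output.) The dpdk_mem_utility test starting above follows a short sequence: bring up spdk_tgt, ask it to dump DPDK memory statistics with the env_dpdk_get_mem_stats RPC (the dump lands in /tmp/spdk_mem_dump.txt, as the RPC reply below shows), then summarize that dump with scripts/dpdk_mem_info.py. A minimal by-hand reproduction, assuming a default build tree and the default /var/tmp/spdk.sock RPC socket, would look like:
    ./build/bin/spdk_tgt &
    ./scripts/rpc.py env_dpdk_get_mem_stats
    ./scripts/dpdk_mem_info.py          # heap / mempool / memzone summary
    ./scripts/dpdk_mem_info.py -m 0     # detailed element listing, the invocation the test uses below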
00:05:29.663 [2024-07-13 15:17:00.414911] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.922 [2024-07-13 15:17:00.499815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.187 15:17:00 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.187 15:17:00 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:30.187 15:17:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:30.187 15:17:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:30.187 15:17:00 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.187 15:17:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:30.187 { 00:05:30.187 "filename": "/tmp/spdk_mem_dump.txt" 00:05:30.187 } 00:05:30.187 15:17:00 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.187 15:17:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:30.187 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:30.187 1 heaps totaling size 814.000000 MiB 00:05:30.187 size: 814.000000 MiB heap id: 0 00:05:30.187 end heaps---------- 00:05:30.187 8 mempools totaling size 598.116089 MiB 00:05:30.187 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:30.187 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:30.187 size: 84.521057 MiB name: bdev_io_975569 00:05:30.187 size: 51.011292 MiB name: evtpool_975569 00:05:30.187 size: 50.003479 MiB name: msgpool_975569 00:05:30.187 size: 21.763794 MiB name: PDU_Pool 00:05:30.187 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:30.187 size: 0.026123 MiB name: Session_Pool 00:05:30.187 end mempools------- 00:05:30.187 6 memzones totaling size 4.142822 MiB 00:05:30.187 size: 1.000366 MiB name: RG_ring_0_975569 00:05:30.187 size: 1.000366 MiB name: RG_ring_1_975569 00:05:30.187 size: 1.000366 MiB name: RG_ring_4_975569 00:05:30.187 size: 1.000366 MiB name: RG_ring_5_975569 00:05:30.187 size: 0.125366 MiB name: RG_ring_2_975569 00:05:30.187 size: 0.015991 MiB name: RG_ring_3_975569 00:05:30.187 end memzones------- 00:05:30.187 15:17:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:30.187 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:30.187 list of free elements. 
size: 12.519348 MiB 00:05:30.187 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:30.187 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:30.187 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:30.187 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:30.187 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:30.187 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:30.187 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:30.187 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:30.187 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:30.187 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:30.187 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:30.187 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:30.187 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:30.187 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:30.187 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:30.187 list of standard malloc elements. size: 199.218079 MiB 00:05:30.187 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:30.187 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:30.187 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:30.187 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:30.187 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:30.187 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:30.187 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:30.187 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:30.187 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:30.187 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:30.187 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:30.187 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:30.187 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:30.187 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:30.187 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:30.187 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:30.187 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:30.187 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:30.187 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:30.187 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:30.187 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:30.187 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:30.187 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:30.187 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:30.187 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:30.187 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:30.187 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:30.187 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:30.187 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:30.187 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:30.187 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:30.187 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:30.187 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:30.187 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:30.187 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:30.187 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:30.187 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:30.187 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:30.187 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:30.187 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:30.187 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:30.187 list of memzone associated elements. size: 602.262573 MiB 00:05:30.187 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:30.187 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:30.187 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:30.187 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:30.187 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:30.187 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_975569_0 00:05:30.187 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:30.187 associated memzone info: size: 48.002930 MiB name: MP_evtpool_975569_0 00:05:30.187 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:30.187 associated memzone info: size: 48.002930 MiB name: MP_msgpool_975569_0 00:05:30.187 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:30.187 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:30.187 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:30.187 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:30.187 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:30.187 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_975569 00:05:30.187 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:30.187 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_975569 00:05:30.187 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:30.187 associated memzone info: size: 1.007996 MiB name: MP_evtpool_975569 00:05:30.187 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:30.187 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:30.187 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:30.187 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:30.187 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:30.187 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:30.187 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:30.187 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:30.187 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:30.187 associated memzone info: size: 1.000366 MiB name: RG_ring_0_975569 00:05:30.187 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:30.187 associated memzone info: size: 1.000366 MiB name: RG_ring_1_975569 00:05:30.187 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:30.187 associated memzone info: size: 1.000366 MiB name: RG_ring_4_975569 00:05:30.187 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:30.187 associated memzone info: size: 1.000366 MiB name: RG_ring_5_975569 00:05:30.187 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:30.187 associated memzone 
info: size: 0.500366 MiB name: RG_MP_bdev_io_975569 00:05:30.187 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:30.187 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:30.187 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:30.188 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:30.188 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:30.188 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:30.188 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:30.188 associated memzone info: size: 0.125366 MiB name: RG_ring_2_975569 00:05:30.188 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:30.188 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:30.188 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:30.188 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:30.188 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:30.188 associated memzone info: size: 0.015991 MiB name: RG_ring_3_975569 00:05:30.188 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:30.188 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:30.188 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:30.188 associated memzone info: size: 0.000183 MiB name: MP_msgpool_975569 00:05:30.188 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:30.188 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_975569 00:05:30.188 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:30.188 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:30.188 15:17:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:30.188 15:17:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 975569 00:05:30.188 15:17:00 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 975569 ']' 00:05:30.188 15:17:00 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 975569 00:05:30.188 15:17:00 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:30.188 15:17:00 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.188 15:17:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 975569 00:05:30.188 15:17:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:30.188 15:17:00 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:30.188 15:17:00 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 975569' 00:05:30.188 killing process with pid 975569 00:05:30.188 15:17:00 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 975569 00:05:30.188 15:17:00 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 975569 00:05:30.809 00:05:30.809 real 0m1.036s 00:05:30.809 user 0m0.997s 00:05:30.809 sys 0m0.405s 00:05:30.809 15:17:01 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.809 15:17:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:30.809 ************************************ 00:05:30.809 END TEST dpdk_mem_utility 00:05:30.809 ************************************ 00:05:30.809 15:17:01 -- common/autotest_common.sh@1142 -- # return 0 00:05:30.809 15:17:01 -- spdk/autotest.sh@181 -- # run_test event 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:30.809 15:17:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.809 15:17:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.809 15:17:01 -- common/autotest_common.sh@10 -- # set +x 00:05:30.809 ************************************ 00:05:30.809 START TEST event 00:05:30.809 ************************************ 00:05:30.809 15:17:01 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:30.809 * Looking for test storage... 00:05:30.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:30.809 15:17:01 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:30.809 15:17:01 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:30.809 15:17:01 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:30.809 15:17:01 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:30.809 15:17:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.809 15:17:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.809 ************************************ 00:05:30.809 START TEST event_perf 00:05:30.809 ************************************ 00:05:30.809 15:17:01 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:30.809 Running I/O for 1 seconds...[2024-07-13 15:17:01.424653] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:30.809 [2024-07-13 15:17:01.424717] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid975765 ] 00:05:30.809 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.809 [2024-07-13 15:17:01.455640] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:30.809 [2024-07-13 15:17:01.485699] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:31.067 [2024-07-13 15:17:01.579074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.067 [2024-07-13 15:17:01.579127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.067 [2024-07-13 15:17:01.579245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.067 [2024-07-13 15:17:01.579248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.999 Running I/O for 1 seconds... 00:05:31.999 lcore 0: 232873 00:05:31.999 lcore 1: 232873 00:05:31.999 lcore 2: 232873 00:05:31.999 lcore 3: 232872 00:05:31.999 done. 
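(Reference note, not part of the captured output.) The four lcore lines above are per-core event counts from the one-second event_perf run; near-identical counts across the 0xF mask indicate the events were spread evenly over the four reactors. The binary can be run standalone with the same arguments the harness used:
    ./test/event/event_perf/event_perf -m 0xF -t 1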
00:05:31.999 00:05:31.999 real 0m1.249s 00:05:31.999 user 0m4.164s 00:05:31.999 sys 0m0.082s 00:05:31.999 15:17:02 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.999 15:17:02 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:31.999 ************************************ 00:05:31.999 END TEST event_perf 00:05:31.999 ************************************ 00:05:31.999 15:17:02 event -- common/autotest_common.sh@1142 -- # return 0 00:05:31.999 15:17:02 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:31.999 15:17:02 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:31.999 15:17:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.999 15:17:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.999 ************************************ 00:05:31.999 START TEST event_reactor 00:05:31.999 ************************************ 00:05:31.999 15:17:02 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:31.999 [2024-07-13 15:17:02.729861] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:31.999 [2024-07-13 15:17:02.729956] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid975922 ] 00:05:31.999 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.999 [2024-07-13 15:17:02.762148] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:32.256 [2024-07-13 15:17:02.794440] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.256 [2024-07-13 15:17:02.885598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.627 test_start 00:05:33.627 oneshot 00:05:33.627 tick 100 00:05:33.627 tick 100 00:05:33.627 tick 250 00:05:33.627 tick 100 00:05:33.627 tick 100 00:05:33.627 tick 250 00:05:33.627 tick 100 00:05:33.627 tick 500 00:05:33.627 tick 100 00:05:33.627 tick 100 00:05:33.627 tick 250 00:05:33.627 tick 100 00:05:33.627 tick 100 00:05:33.627 test_end 00:05:33.627 00:05:33.627 real 0m1.250s 00:05:33.627 user 0m1.163s 00:05:33.627 sys 0m0.082s 00:05:33.627 15:17:03 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.627 15:17:03 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:33.627 ************************************ 00:05:33.627 END TEST event_reactor 00:05:33.627 ************************************ 00:05:33.627 15:17:03 event -- common/autotest_common.sh@1142 -- # return 0 00:05:33.627 15:17:03 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:33.627 15:17:03 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:33.627 15:17:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.627 15:17:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.627 ************************************ 00:05:33.627 START TEST event_reactor_perf 00:05:33.627 ************************************ 00:05:33.627 15:17:04 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:33.627 [2024-07-13 15:17:04.021591] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:33.627 [2024-07-13 15:17:04.021655] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid976078 ] 00:05:33.627 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.627 [2024-07-13 15:17:04.052696] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
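(Reference note, not part of the captured output.) The oneshot/tick trace just above comes from the single-core event_reactor test, and the event_reactor_perf run now starting reports raw events per second on one reactor. Both take the same -t <seconds> runtime switch seen in the harness:
    ./test/event/reactor/reactor -t 1
    ./test/event/reactor_perf/reactor_perf -t 1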
00:05:33.627 [2024-07-13 15:17:04.082821] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.628 [2024-07-13 15:17:04.175512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.559 test_start 00:05:34.559 test_end 00:05:34.559 Performance: 357472 events per second 00:05:34.559 00:05:34.559 real 0m1.246s 00:05:34.559 user 0m1.169s 00:05:34.559 sys 0m0.073s 00:05:34.559 15:17:05 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.559 15:17:05 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:34.559 ************************************ 00:05:34.559 END TEST event_reactor_perf 00:05:34.559 ************************************ 00:05:34.559 15:17:05 event -- common/autotest_common.sh@1142 -- # return 0 00:05:34.559 15:17:05 event -- event/event.sh@49 -- # uname -s 00:05:34.559 15:17:05 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:34.559 15:17:05 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:34.559 15:17:05 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.559 15:17:05 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.559 15:17:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.559 ************************************ 00:05:34.559 START TEST event_scheduler 00:05:34.559 ************************************ 00:05:34.559 15:17:05 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:34.818 * Looking for test storage... 00:05:34.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:34.818 15:17:05 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:34.818 15:17:05 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=976257 00:05:34.818 15:17:05 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:34.818 15:17:05 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.818 15:17:05 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 976257 00:05:34.818 15:17:05 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 976257 ']' 00:05:34.818 15:17:05 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.818 15:17:05 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.818 15:17:05 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.818 15:17:05 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.818 15:17:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.818 [2024-07-13 15:17:05.401220] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:05:34.818 [2024-07-13 15:17:05.401326] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid976257 ] 00:05:34.818 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.818 [2024-07-13 15:17:05.437670] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:34.818 [2024-07-13 15:17:05.464138] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:34.818 [2024-07-13 15:17:05.553062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.818 [2024-07-13 15:17:05.553090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.818 [2024-07-13 15:17:05.553135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.818 [2024-07-13 15:17:05.553138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.077 15:17:05 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.077 15:17:05 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:35.077 15:17:05 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:35.077 15:17:05 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.077 15:17:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.077 [2024-07-13 15:17:05.625973] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:35.077 [2024-07-13 15:17:05.626000] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:35.077 [2024-07-13 15:17:05.626018] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:35.077 [2024-07-13 15:17:05.626030] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:35.077 [2024-07-13 15:17:05.626040] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:35.077 15:17:05 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.077 15:17:05 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:35.077 15:17:05 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.077 15:17:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.077 [2024-07-13 15:17:05.710836] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
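(Reference note, not part of the captured output.) The trace that follows is the scheduler_create_thread subtest: it switches the running scheduler app to the dynamic scheduler, finishes framework init, then creates, activates and deletes threads through rpc.py's --plugin mechanism. Issued by hand against the default RPC socket (and assuming the scheduler_plugin module from the test directory is on PYTHONPATH), the same calls look like:
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread ids come from the create calls
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12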
00:05:35.077 15:17:05 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.077 15:17:05 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:35.077 15:17:05 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.077 15:17:05 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.077 15:17:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.077 ************************************ 00:05:35.077 START TEST scheduler_create_thread 00:05:35.077 ************************************ 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.077 2 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.077 3 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.077 4 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.077 5 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.077 6 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.077 7 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.077 8 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.077 9 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.077 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.077 10 00:05:35.078 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.078 15:17:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:35.078 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.078 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.078 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.078 15:17:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:35.078 15:17:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:35.078 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.078 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.078 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.078 15:17:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:35.078 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.078 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.078 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.078 15:17:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:35.078 15:17:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:35.078 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.078 15:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.644 15:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.644 00:05:35.644 real 0m0.591s 00:05:35.644 user 0m0.010s 00:05:35.644 sys 0m0.004s 00:05:35.644 15:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.644 15:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.644 ************************************ 00:05:35.644 END TEST scheduler_create_thread 00:05:35.644 ************************************ 00:05:35.644 15:17:06 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:35.644 15:17:06 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:35.644 15:17:06 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 976257 00:05:35.644 15:17:06 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 976257 ']' 00:05:35.644 15:17:06 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 976257 00:05:35.644 15:17:06 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:35.644 15:17:06 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.644 15:17:06 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 976257 00:05:35.644 15:17:06 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:35.644 15:17:06 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:35.644 15:17:06 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 976257' 00:05:35.644 killing process with pid 976257 00:05:35.644 15:17:06 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 976257 00:05:35.644 15:17:06 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 976257 00:05:36.210 [2024-07-13 15:17:06.811009] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:36.468 00:05:36.468 real 0m1.726s 00:05:36.468 user 0m2.274s 00:05:36.468 sys 0m0.323s 00:05:36.468 15:17:07 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.468 15:17:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:36.468 ************************************ 00:05:36.468 END TEST event_scheduler 00:05:36.468 ************************************ 00:05:36.468 15:17:07 event -- common/autotest_common.sh@1142 -- # return 0 00:05:36.468 15:17:07 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:36.468 15:17:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:36.468 15:17:07 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.468 15:17:07 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.468 15:17:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.468 ************************************ 00:05:36.468 START TEST app_repeat 00:05:36.468 ************************************ 00:05:36.468 15:17:07 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:36.468 15:17:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.468 15:17:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.468 15:17:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:36.468 15:17:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.468 15:17:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:36.468 15:17:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:36.468 15:17:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:36.468 15:17:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=976568 00:05:36.468 15:17:07 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:36.468 15:17:07 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.468 15:17:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 976568' 00:05:36.468 Process app_repeat pid: 976568 00:05:36.468 15:17:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:36.468 15:17:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:36.468 spdk_app_start Round 0 00:05:36.468 15:17:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 976568 /var/tmp/spdk-nbd.sock 00:05:36.468 15:17:07 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 976568 ']' 00:05:36.468 15:17:07 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.468 15:17:07 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.468 15:17:07 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:36.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:36.468 15:17:07 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.468 15:17:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:36.468 [2024-07-13 15:17:07.113119] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
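(Reference note, not part of the captured output.) The app_repeat test starting here brings the target up with a dedicated nbd RPC socket and runs several spdk_app_start rounds, each creating malloc bdevs, exporting them as /dev/nbd0 and /dev/nbd1 and verifying data through them. The harness invocation, reproducible by hand, is:
    ./test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4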
00:05:36.468 [2024-07-13 15:17:07.113197] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid976568 ] 00:05:36.468 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.468 [2024-07-13 15:17:07.145639] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:36.468 [2024-07-13 15:17:07.177572] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.726 [2024-07-13 15:17:07.267888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.726 [2024-07-13 15:17:07.267893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.726 15:17:07 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.726 15:17:07 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:36.726 15:17:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.984 Malloc0 00:05:36.984 15:17:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.242 Malloc1 00:05:37.242 15:17:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.242 15:17:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.242 15:17:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.242 15:17:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:37.242 15:17:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.242 15:17:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:37.242 15:17:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.242 15:17:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.242 15:17:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.242 15:17:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:37.242 15:17:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.242 15:17:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:37.242 15:17:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:37.242 15:17:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:37.242 15:17:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.242 15:17:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:37.500 /dev/nbd0 00:05:37.500 15:17:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:37.500 15:17:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:37.500 15:17:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:37.500 15:17:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:37.500 15:17:08 
event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:37.500 15:17:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:37.500 15:17:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:37.500 15:17:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:37.500 15:17:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:37.500 15:17:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:37.500 15:17:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.500 1+0 records in 00:05:37.500 1+0 records out 00:05:37.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000159275 s, 25.7 MB/s 00:05:37.500 15:17:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.500 15:17:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:37.500 15:17:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.500 15:17:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:37.500 15:17:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:37.500 15:17:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.500 15:17:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.500 15:17:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:37.757 /dev/nbd1 00:05:37.757 15:17:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:37.757 15:17:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:37.757 15:17:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:37.757 15:17:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:37.757 15:17:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:37.757 15:17:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:37.757 15:17:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:37.757 15:17:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:37.757 15:17:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:37.757 15:17:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:37.757 15:17:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.757 1+0 records in 00:05:37.757 1+0 records out 00:05:37.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211584 s, 19.4 MB/s 00:05:37.757 15:17:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.757 15:17:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:37.757 15:17:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.757 15:17:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:37.758 
15:17:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:37.758 15:17:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.758 15:17:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.758 15:17:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.758 15:17:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.758 15:17:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.015 15:17:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:38.015 { 00:05:38.015 "nbd_device": "/dev/nbd0", 00:05:38.015 "bdev_name": "Malloc0" 00:05:38.015 }, 00:05:38.015 { 00:05:38.015 "nbd_device": "/dev/nbd1", 00:05:38.015 "bdev_name": "Malloc1" 00:05:38.015 } 00:05:38.015 ]' 00:05:38.015 15:17:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:38.015 { 00:05:38.015 "nbd_device": "/dev/nbd0", 00:05:38.015 "bdev_name": "Malloc0" 00:05:38.015 }, 00:05:38.015 { 00:05:38.015 "nbd_device": "/dev/nbd1", 00:05:38.015 "bdev_name": "Malloc1" 00:05:38.015 } 00:05:38.015 ]' 00:05:38.015 15:17:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:38.273 /dev/nbd1' 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:38.273 /dev/nbd1' 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:38.273 256+0 records in 00:05:38.273 256+0 records out 00:05:38.273 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00472968 s, 222 MB/s 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:38.273 256+0 records in 00:05:38.273 256+0 records out 00:05:38.273 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200093 s, 52.4 MB/s 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:38.273 256+0 records in 00:05:38.273 256+0 records out 00:05:38.273 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022381 s, 46.9 MB/s 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.273 15:17:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:38.530 15:17:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:38.530 15:17:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:38.530 15:17:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:38.530 15:17:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.530 15:17:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.530 15:17:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:38.531 15:17:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.531 15:17:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.531 15:17:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.531 15:17:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:38.788 15:17:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:38.788 15:17:09 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:38.788 15:17:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:38.788 15:17:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.788 15:17:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.788 15:17:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:38.788 15:17:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.788 15:17:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.788 15:17:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.788 15:17:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.788 15:17:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.046 15:17:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.046 15:17:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.046 15:17:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.046 15:17:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.046 15:17:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.046 15:17:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.046 15:17:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:39.046 15:17:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.046 15:17:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.046 15:17:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.046 15:17:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.046 15:17:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.046 15:17:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:39.304 15:17:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:39.562 [2024-07-13 15:17:10.185021] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.562 [2024-07-13 15:17:10.275493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.562 [2024-07-13 15:17:10.275497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.821 [2024-07-13 15:17:10.337785] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.821 [2024-07-13 15:17:10.337853] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
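For readers tracing the nbd_common.sh calls above: the write/verify cycle that Round 0 just completed reduces to the shell pattern below. This is a simplified sketch, not the test script itself; the temp-file path is illustrative and the 1 MiB size is simply what this run used.

  # write phase: seed a temp file with random data, then copy it onto each NBD device
  tmp=/tmp/nbdrandtest                          # illustrative path
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
  done
  # verify phase: byte-for-byte comparison of the first 1 MiB of each device
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$nbd"                # any mismatch fails the test
  done
  rm "$tmp"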
00:05:42.346 15:17:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:42.346 15:17:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:42.346 spdk_app_start Round 1 00:05:42.346 15:17:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 976568 /var/tmp/spdk-nbd.sock 00:05:42.346 15:17:12 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 976568 ']' 00:05:42.346 15:17:12 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:42.346 15:17:12 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.346 15:17:12 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:42.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:42.346 15:17:12 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.346 15:17:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.603 15:17:13 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.603 15:17:13 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:42.603 15:17:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.860 Malloc0 00:05:42.860 15:17:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.117 Malloc1 00:05:43.117 15:17:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.117 15:17:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.117 15:17:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.117 15:17:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.117 15:17:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.117 15:17:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.117 15:17:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.117 15:17:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.117 15:17:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.117 15:17:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.117 15:17:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.117 15:17:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.117 15:17:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:43.117 15:17:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.117 15:17:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.117 15:17:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:43.373 /dev/nbd0 00:05:43.373 15:17:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:43.373 15:17:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:43.373 15:17:13 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:43.373 15:17:13 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:43.373 15:17:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:43.373 15:17:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:43.373 15:17:13 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:43.373 15:17:13 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:43.373 15:17:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:43.373 15:17:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:43.373 15:17:13 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.373 1+0 records in 00:05:43.373 1+0 records out 00:05:43.373 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000141539 s, 28.9 MB/s 00:05:43.373 15:17:13 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.373 15:17:13 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:43.373 15:17:13 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.373 15:17:13 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:43.373 15:17:13 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:43.373 15:17:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.373 15:17:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.373 15:17:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.629 /dev/nbd1 00:05:43.629 15:17:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:43.629 15:17:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:43.629 15:17:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:43.629 15:17:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:43.629 15:17:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:43.629 15:17:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:43.629 15:17:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:43.629 15:17:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:43.629 15:17:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:43.629 15:17:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:43.629 15:17:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.629 1+0 records in 00:05:43.629 1+0 records out 00:05:43.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211487 s, 19.4 MB/s 00:05:43.629 15:17:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.629 15:17:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:43.629 15:17:14 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.629 15:17:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:43.629 15:17:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:43.629 15:17:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.629 15:17:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.629 15:17:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.629 15:17:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.629 15:17:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.887 15:17:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:43.887 { 00:05:43.887 "nbd_device": "/dev/nbd0", 00:05:43.887 "bdev_name": "Malloc0" 00:05:43.887 }, 00:05:43.887 { 00:05:43.887 "nbd_device": "/dev/nbd1", 00:05:43.887 "bdev_name": "Malloc1" 00:05:43.887 } 00:05:43.887 ]' 00:05:43.887 15:17:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:43.887 { 00:05:43.887 "nbd_device": "/dev/nbd0", 00:05:43.887 "bdev_name": "Malloc0" 00:05:43.887 }, 00:05:43.887 { 00:05:43.887 "nbd_device": "/dev/nbd1", 00:05:43.887 "bdev_name": "Malloc1" 00:05:43.887 } 00:05:43.887 ]' 00:05:43.887 15:17:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.887 15:17:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:43.887 /dev/nbd1' 00:05:43.887 15:17:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:43.887 /dev/nbd1' 00:05:43.887 15:17:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.887 15:17:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:43.887 15:17:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:43.887 15:17:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:43.887 15:17:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:43.887 15:17:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:43.887 15:17:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.887 15:17:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.887 15:17:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:43.887 15:17:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.887 15:17:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:43.887 15:17:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:43.887 256+0 records in 00:05:43.887 256+0 records out 00:05:43.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00493972 s, 212 MB/s 00:05:43.887 15:17:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.887 15:17:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:43.887 256+0 records in 00:05:43.887 256+0 records out 00:05:43.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0211084 s, 49.7 MB/s 00:05:43.887 15:17:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.887 15:17:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:43.888 256+0 records in 00:05:43.888 256+0 records out 00:05:43.888 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229118 s, 45.8 MB/s 00:05:43.888 15:17:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:43.888 15:17:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.888 15:17:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.888 15:17:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:43.888 15:17:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.888 15:17:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:43.888 15:17:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:43.888 15:17:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.888 15:17:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:43.888 15:17:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.888 15:17:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:43.888 15:17:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.888 15:17:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:43.888 15:17:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.888 15:17:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.888 15:17:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:43.888 15:17:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:43.888 15:17:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.888 15:17:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.163 15:17:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.163 15:17:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.163 15:17:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.163 15:17:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.163 15:17:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.163 15:17:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.163 15:17:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.163 15:17:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.163 15:17:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.163 15:17:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:44.420 15:17:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:44.420 15:17:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:44.420 15:17:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:44.420 15:17:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.420 15:17:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.420 15:17:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:44.420 15:17:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.420 15:17:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.420 15:17:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.420 15:17:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.420 15:17:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.678 15:17:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:44.678 15:17:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:44.678 15:17:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.678 15:17:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:44.935 15:17:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:44.935 15:17:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.935 15:17:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:44.935 15:17:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:44.935 15:17:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:44.935 15:17:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:44.935 15:17:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:44.935 15:17:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:44.935 15:17:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:45.192 15:17:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:45.192 [2024-07-13 15:17:15.947079] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.450 [2024-07-13 15:17:16.037029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.450 [2024-07-13 15:17:16.037033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.450 [2024-07-13 15:17:16.099917] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:45.450 [2024-07-13 15:17:16.100006] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
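The waitfornbd_exit calls traced during teardown above poll /proc/partitions until the stopped NBD device disappears, with a bounded retry count. A minimal equivalent, assuming a short sleep between attempts (the trace shows the 20-iteration bound but not the delay):

  waitfornbd_exit() {
      local nbd_name=$1
      local i
      for ((i = 1; i <= 20; i++)); do
          # the device vanishes from /proc/partitions once the kernel releases it
          if ! grep -q -w "$nbd_name" /proc/partitions; then
              return 0
          fi
          sleep 0.1                     # assumed back-off; the real helper may differ
      done
      return 1                          # still present after the retry budget
  }
  waitfornbd_exit nbd0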
00:05:47.974 15:17:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:47.974 15:17:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:47.974 spdk_app_start Round 2 00:05:47.974 15:17:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 976568 /var/tmp/spdk-nbd.sock 00:05:47.974 15:17:18 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 976568 ']' 00:05:47.974 15:17:18 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.974 15:17:18 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.974 15:17:18 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:47.974 15:17:18 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.974 15:17:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.233 15:17:18 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.233 15:17:18 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:48.233 15:17:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.491 Malloc0 00:05:48.491 15:17:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.749 Malloc1 00:05:48.749 15:17:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.749 15:17:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.749 15:17:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.749 15:17:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.749 15:17:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.749 15:17:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.749 15:17:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.749 15:17:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.749 15:17:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.749 15:17:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.749 15:17:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.749 15:17:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.749 15:17:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:48.749 15:17:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.749 15:17:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.749 15:17:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.007 /dev/nbd0 00:05:49.007 15:17:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.007 15:17:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:49.007 15:17:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:49.007 15:17:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:49.007 15:17:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:49.007 15:17:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:49.007 15:17:19 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:49.007 15:17:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:49.007 15:17:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:49.007 15:17:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:49.007 15:17:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.265 1+0 records in 00:05:49.265 1+0 records out 00:05:49.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193367 s, 21.2 MB/s 00:05:49.265 15:17:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.265 15:17:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:49.266 15:17:19 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.266 15:17:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:49.266 15:17:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:49.266 15:17:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.266 15:17:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.266 15:17:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:49.266 /dev/nbd1 00:05:49.266 15:17:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:49.266 15:17:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:49.266 15:17:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:49.266 15:17:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:49.266 15:17:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:49.266 15:17:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:49.266 15:17:20 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:49.523 15:17:20 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:49.523 15:17:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:49.523 15:17:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:49.523 15:17:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.523 1+0 records in 00:05:49.523 1+0 records out 00:05:49.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236002 s, 17.4 MB/s 00:05:49.523 15:17:20 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.523 15:17:20 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:49.523 15:17:20 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.523 15:17:20 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:49.523 15:17:20 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:49.523 15:17:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.523 15:17:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.523 15:17:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.523 15:17:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.523 15:17:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:49.781 { 00:05:49.781 "nbd_device": "/dev/nbd0", 00:05:49.781 "bdev_name": "Malloc0" 00:05:49.781 }, 00:05:49.781 { 00:05:49.781 "nbd_device": "/dev/nbd1", 00:05:49.781 "bdev_name": "Malloc1" 00:05:49.781 } 00:05:49.781 ]' 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:49.781 { 00:05:49.781 "nbd_device": "/dev/nbd0", 00:05:49.781 "bdev_name": "Malloc0" 00:05:49.781 }, 00:05:49.781 { 00:05:49.781 "nbd_device": "/dev/nbd1", 00:05:49.781 "bdev_name": "Malloc1" 00:05:49.781 } 00:05:49.781 ]' 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:49.781 /dev/nbd1' 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:49.781 /dev/nbd1' 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:49.781 256+0 records in 00:05:49.781 256+0 records out 00:05:49.781 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00505762 s, 207 MB/s 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.781 256+0 records in 00:05:49.781 256+0 records out 00:05:49.781 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0238088 s, 44.0 MB/s 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:49.781 256+0 records in 00:05:49.781 256+0 records out 00:05:49.781 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245341 s, 42.7 MB/s 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.781 15:17:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.039 15:17:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.039 15:17:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.039 15:17:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.039 15:17:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.039 15:17:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.039 15:17:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.039 15:17:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.039 15:17:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.039 15:17:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.039 15:17:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.296 15:17:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.296 15:17:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.296 15:17:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.296 15:17:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.296 15:17:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.296 15:17:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.296 15:17:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.296 15:17:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.296 15:17:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.296 15:17:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.296 15:17:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.554 15:17:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:50.554 15:17:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:50.554 15:17:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.554 15:17:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.554 15:17:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.554 15:17:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.554 15:17:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:50.554 15:17:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.554 15:17:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.554 15:17:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.554 15:17:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.554 15:17:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.554 15:17:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:50.812 15:17:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:51.070 [2024-07-13 15:17:21.755792] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.327 [2024-07-13 15:17:21.846601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.327 [2024-07-13 15:17:21.846605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.327 [2024-07-13 15:17:21.908673] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:51.327 [2024-07-13 15:17:21.908754] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
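Each numbered Round in this log is one pass of the same cycle: wait for the app's RPC socket, create two malloc bdevs, expose them as NBD devices, run the dd/cmp verification sketched earlier, stop the devices, then send the app SIGTERM and pause before the next iteration. Roughly, with RPC standing in for the repository's scripts/rpc.py and the socket used by this run:

  RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"     # path shortened for the sketch
  for round in 0 1 2; do
      echo "spdk_app_start Round $round"
      # (the harness waits here until the restarted app listens on the socket)
      $RPC bdev_malloc_create 64 4096                   # -> Malloc0
      $RPC bdev_malloc_create 64 4096                   # -> Malloc1
      $RPC nbd_start_disk Malloc0 /dev/nbd0
      $RPC nbd_start_disk Malloc1 /dev/nbd1
      # ... dd/cmp data verification, then nbd_stop_disk for both devices ...
      $RPC spdk_kill_instance SIGTERM                   # app restarts for the next round
      sleep 3
  done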
00:05:53.851 15:17:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 976568 /var/tmp/spdk-nbd.sock 00:05:53.851 15:17:24 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 976568 ']' 00:05:53.851 15:17:24 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.851 15:17:24 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.851 15:17:24 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:53.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:53.851 15:17:24 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.851 15:17:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.109 15:17:24 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.109 15:17:24 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:54.109 15:17:24 event.app_repeat -- event/event.sh@39 -- # killprocess 976568 00:05:54.109 15:17:24 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 976568 ']' 00:05:54.109 15:17:24 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 976568 00:05:54.109 15:17:24 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:54.109 15:17:24 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.109 15:17:24 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 976568 00:05:54.109 15:17:24 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.109 15:17:24 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.109 15:17:24 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 976568' 00:05:54.109 killing process with pid 976568 00:05:54.109 15:17:24 event.app_repeat -- common/autotest_common.sh@967 -- # kill 976568 00:05:54.109 15:17:24 event.app_repeat -- common/autotest_common.sh@972 -- # wait 976568 00:05:54.368 spdk_app_start is called in Round 0. 00:05:54.368 Shutdown signal received, stop current app iteration 00:05:54.368 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 reinitialization... 00:05:54.368 spdk_app_start is called in Round 1. 00:05:54.368 Shutdown signal received, stop current app iteration 00:05:54.368 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 reinitialization... 00:05:54.368 spdk_app_start is called in Round 2. 00:05:54.368 Shutdown signal received, stop current app iteration 00:05:54.368 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 reinitialization... 00:05:54.368 spdk_app_start is called in Round 3. 
00:05:54.368 Shutdown signal received, stop current app iteration 00:05:54.368 15:17:25 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:54.368 15:17:25 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:54.368 00:05:54.368 real 0m17.942s 00:05:54.368 user 0m39.115s 00:05:54.368 sys 0m3.140s 00:05:54.368 15:17:25 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.368 15:17:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.368 ************************************ 00:05:54.368 END TEST app_repeat 00:05:54.368 ************************************ 00:05:54.368 15:17:25 event -- common/autotest_common.sh@1142 -- # return 0 00:05:54.368 15:17:25 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:54.368 15:17:25 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:54.368 15:17:25 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.368 15:17:25 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.368 15:17:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.368 ************************************ 00:05:54.368 START TEST cpu_locks 00:05:54.368 ************************************ 00:05:54.368 15:17:25 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:54.368 * Looking for test storage... 00:05:54.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:54.627 15:17:25 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:54.627 15:17:25 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:54.627 15:17:25 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:54.627 15:17:25 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:54.627 15:17:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.627 15:17:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.627 15:17:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.627 ************************************ 00:05:54.627 START TEST default_locks 00:05:54.627 ************************************ 00:05:54.627 15:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:54.627 15:17:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=978923 00:05:54.627 15:17:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.627 15:17:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 978923 00:05:54.627 15:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 978923 ']' 00:05:54.627 15:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.627 15:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.627 15:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
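The START TEST / END TEST banners and the real/user/sys lines above come from the harness's run_test wrapper, which brackets a named sub-test and times it. In outline (a sketch of the visible behaviour only; the real helper also records per-test timing data and toggles xtrace):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                                   # run the sub-test and report timings
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
  run_test cpu_locks ./test/event/cpu_locks.sh    # path shortened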
00:05:54.627 15:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.627 15:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.627 [2024-07-13 15:17:25.204035] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:54.627 [2024-07-13 15:17:25.204119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid978923 ] 00:05:54.627 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.627 [2024-07-13 15:17:25.236064] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:54.627 [2024-07-13 15:17:25.263798] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.627 [2024-07-13 15:17:25.347656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.885 15:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.885 15:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:54.885 15:17:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 978923 00:05:54.885 15:17:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 978923 00:05:54.885 15:17:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.143 lslocks: write error 00:05:55.143 15:17:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 978923 00:05:55.144 15:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 978923 ']' 00:05:55.144 15:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 978923 00:05:55.144 15:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:55.144 15:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.144 15:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 978923 00:05:55.402 15:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.402 15:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.402 15:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 978923' 00:05:55.402 killing process with pid 978923 00:05:55.402 15:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 978923 00:05:55.402 15:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 978923 00:05:55.660 15:17:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 978923 00:05:55.660 15:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:55.660 15:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 978923 00:05:55.660 15:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:55.660 15:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.660 15:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 
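The default_locks case above checks whether the target still holds its per-core file locks by listing the locks owned by its pid and grepping for the lock-file name; the stray 'lslocks: write error' is most likely lslocks hitting a closed pipe because grep -q exits at the first match, not a test failure. Condensed:

  locks_exist() {
      local pid=$1
      # spdk_tgt holds a file lock whose name contains spdk_cpu_lock for each claimed core
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }
  locks_exist 978923 && echo "core locks held" || echo "no core locks"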
00:05:55.660 15:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.660 15:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 978923 00:05:55.660 15:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 978923 ']' 00:05:55.660 15:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.660 15:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.660 15:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.660 15:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.660 15:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (978923) - No such process 00:05:55.660 ERROR: process (pid: 978923) is no longer running 00:05:55.660 15:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.660 15:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:55.660 15:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:55.660 15:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:55.660 15:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:55.661 15:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:55.661 15:17:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:55.661 15:17:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:55.661 15:17:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:55.661 15:17:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:55.661 00:05:55.661 real 0m1.195s 00:05:55.661 user 0m1.115s 00:05:55.661 sys 0m0.538s 00:05:55.661 15:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.661 15:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.661 ************************************ 00:05:55.661 END TEST default_locks 00:05:55.661 ************************************ 00:05:55.661 15:17:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:55.661 15:17:26 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:55.661 15:17:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.661 15:17:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.661 15:17:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.661 ************************************ 00:05:55.661 START TEST default_locks_via_rpc 00:05:55.661 ************************************ 00:05:55.661 15:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:55.661 15:17:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=979085 00:05:55.661 15:17:26 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.661 15:17:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 979085 00:05:55.661 15:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 979085 ']' 00:05:55.661 15:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.661 15:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.661 15:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.661 15:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.661 15:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.919 [2024-07-13 15:17:26.452100] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:55.919 [2024-07-13 15:17:26.452199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid979085 ] 00:05:55.919 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.919 [2024-07-13 15:17:26.484436] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
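waitforlisten, which appears before every RPC-driven step here, simply retries an RPC call until the freshly started target answers on its UNIX socket (max_retries=100 in the trace above). A stripped-down version, with an assumed poll interval; the real helper also keeps checking that the pid is still alive:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
      while ((max_retries-- > 0)); do
          kill -0 "$pid" 2>/dev/null || return 1          # target died while we waited
          if ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
              return 0                                    # socket is up and answering
          fi
          sleep 0.1                                       # assumed poll interval
      done
      return 1
  }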
00:05:55.919 [2024-07-13 15:17:26.510495] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.919 [2024-07-13 15:17:26.599389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.177 15:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.177 15:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:56.177 15:17:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:56.177 15:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.177 15:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.177 15:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.177 15:17:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:56.177 15:17:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:56.177 15:17:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:56.177 15:17:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:56.177 15:17:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:56.177 15:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.177 15:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.177 15:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.177 15:17:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 979085 00:05:56.177 15:17:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 979085 00:05:56.177 15:17:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.435 15:17:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 979085 00:05:56.435 15:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 979085 ']' 00:05:56.435 15:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 979085 00:05:56.435 15:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:56.435 15:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.435 15:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 979085 00:05:56.693 15:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.693 15:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.693 15:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 979085' 00:05:56.693 killing process with pid 979085 00:05:56.693 15:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 979085 00:05:56.693 15:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 979085 00:05:56.952 00:05:56.952 real 0m1.238s 00:05:56.952 user 0m1.175s 00:05:56.952 sys 0m0.529s 00:05:56.952 15:17:27 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.952 15:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.952 ************************************ 00:05:56.952 END TEST default_locks_via_rpc 00:05:56.952 ************************************ 00:05:56.952 15:17:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:56.952 15:17:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:56.952 15:17:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.952 15:17:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.952 15:17:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.952 ************************************ 00:05:56.952 START TEST non_locking_app_on_locked_coremask 00:05:56.952 ************************************ 00:05:56.952 15:17:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:56.952 15:17:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=979253 00:05:56.952 15:17:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.952 15:17:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 979253 /var/tmp/spdk.sock 00:05:56.952 15:17:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 979253 ']' 00:05:56.952 15:17:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.952 15:17:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.952 15:17:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.952 15:17:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.952 15:17:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.211 [2024-07-13 15:17:27.743312] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:57.211 [2024-07-13 15:17:27.743414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid979253 ] 00:05:57.211 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.211 [2024-07-13 15:17:27.775038] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
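killprocess, which just tore down the default_locks_via_rpc target, is a guarded kill: make sure the pid still exists, look up what it is running as (ps -o comm=, which reports reactor_0 for an SPDK target), then SIGTERM it and wait for it to exit. Roughly:

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 1      # nothing to do if it is already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 for an spdk_tgt
      # the real helper special-cases $name = sudo; omitted in this sketch
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true             # reap it when it is our own child
  }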
00:05:57.211 [2024-07-13 15:17:27.807302] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.211 [2024-07-13 15:17:27.895626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.469 15:17:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.469 15:17:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:57.469 15:17:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=979377 00:05:57.469 15:17:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:57.469 15:17:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 979377 /var/tmp/spdk2.sock 00:05:57.469 15:17:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 979377 ']' 00:05:57.469 15:17:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.469 15:17:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.469 15:17:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.469 15:17:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.469 15:17:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.469 [2024-07-13 15:17:28.195240] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:05:57.469 [2024-07-13 15:17:28.195334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid979377 ] 00:05:57.469 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.469 [2024-07-13 15:17:28.230659] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:57.728 [2024-07-13 15:17:28.289163] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
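The second target in the non_locking_app_on_locked_coremask case is launched so it can share core 0 with the first one: the trace above shows -m 0x1 combined with --disable-cpumask-locks and a private RPC socket, which is why app.c reports "CPU core locks deactivated." A hedged sketch of that launch pattern (the relative binary path is an assumption; the log uses the full Jenkins workspace path):

# First target claims core 0 and creates /var/tmp/spdk_cpu_lock_000.
build/bin/spdk_tgt -m 0x1 &
pid1=$!

# Second target reuses core 0 but skips the lock files, and answers RPC on its own socket
# so both instances can be driven independently.
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!
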
00:05:57.728 [2024-07-13 15:17:28.289194] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.728 [2024-07-13 15:17:28.473079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.664 15:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.664 15:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:58.664 15:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 979253 00:05:58.664 15:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 979253 00:05:58.664 15:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.956 lslocks: write error 00:05:58.956 15:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 979253 00:05:58.956 15:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 979253 ']' 00:05:58.956 15:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 979253 00:05:58.956 15:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:58.956 15:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.956 15:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 979253 00:05:58.956 15:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.956 15:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.956 15:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 979253' 00:05:58.956 killing process with pid 979253 00:05:58.956 15:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 979253 00:05:58.956 15:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 979253 00:05:59.891 15:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 979377 00:05:59.891 15:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 979377 ']' 00:05:59.891 15:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 979377 00:05:59.891 15:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:59.891 15:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.891 15:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 979377 00:05:59.891 15:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.891 15:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.891 15:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 979377' 00:05:59.891 killing 
process with pid 979377 00:05:59.891 15:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 979377 00:05:59.891 15:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 979377 00:06:00.149 00:06:00.149 real 0m3.123s 00:06:00.149 user 0m3.259s 00:06:00.149 sys 0m1.011s 00:06:00.149 15:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.149 15:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.149 ************************************ 00:06:00.149 END TEST non_locking_app_on_locked_coremask 00:06:00.149 ************************************ 00:06:00.149 15:17:30 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:00.149 15:17:30 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:00.149 15:17:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.149 15:17:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.149 15:17:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.149 ************************************ 00:06:00.149 START TEST locking_app_on_unlocked_coremask 00:06:00.149 ************************************ 00:06:00.149 15:17:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:00.149 15:17:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=979689 00:06:00.149 15:17:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:00.149 15:17:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 979689 /var/tmp/spdk.sock 00:06:00.149 15:17:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 979689 ']' 00:06:00.149 15:17:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.150 15:17:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.150 15:17:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.150 15:17:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.150 15:17:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.150 [2024-07-13 15:17:30.910550] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:06:00.150 [2024-07-13 15:17:30.910650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid979689 ] 00:06:00.409 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.409 [2024-07-13 15:17:30.944363] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:00.409 [2024-07-13 15:17:30.970205] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:00.409 [2024-07-13 15:17:30.970229] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.409 [2024-07-13 15:17:31.058581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.668 15:17:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.668 15:17:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:00.668 15:17:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=979692 00:06:00.668 15:17:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 979692 /var/tmp/spdk2.sock 00:06:00.668 15:17:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 979692 ']' 00:06:00.668 15:17:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.668 15:17:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.668 15:17:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:00.668 15:17:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.668 15:17:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.668 15:17:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.668 [2024-07-13 15:17:31.364630] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:00.668 [2024-07-13 15:17:31.364716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid979692 ] 00:06:00.668 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.668 [2024-07-13 15:17:31.399700] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:00.926 [2024-07-13 15:17:31.465265] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.926 [2024-07-13 15:17:31.653534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.860 15:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.860 15:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:01.860 15:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 979692 00:06:01.860 15:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 979692 00:06:01.860 15:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.118 lslocks: write error 00:06:02.118 15:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 979689 00:06:02.118 15:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 979689 ']' 00:06:02.118 15:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 979689 00:06:02.118 15:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:02.118 15:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.118 15:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 979689 00:06:02.118 15:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:02.118 15:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:02.118 15:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 979689' 00:06:02.118 killing process with pid 979689 00:06:02.118 15:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 979689 00:06:02.118 15:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 979689 00:06:03.050 15:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 979692 00:06:03.050 15:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 979692 ']' 00:06:03.050 15:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 979692 00:06:03.050 15:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:03.050 15:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.050 15:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 979692 00:06:03.050 15:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.050 15:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.050 15:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 979692' 00:06:03.050 killing process with pid 979692 00:06:03.050 15:17:33 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 979692 00:06:03.050 15:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 979692 00:06:03.308 00:06:03.308 real 0m3.129s 00:06:03.308 user 0m3.241s 00:06:03.308 sys 0m1.071s 00:06:03.308 15:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.308 15:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.308 ************************************ 00:06:03.308 END TEST locking_app_on_unlocked_coremask 00:06:03.308 ************************************ 00:06:03.308 15:17:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:03.308 15:17:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:03.308 15:17:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.308 15:17:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.308 15:17:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.308 ************************************ 00:06:03.308 START TEST locking_app_on_locked_coremask 00:06:03.308 ************************************ 00:06:03.308 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:03.308 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=980123 00:06:03.308 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.308 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 980123 /var/tmp/spdk.sock 00:06:03.308 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 980123 ']' 00:06:03.308 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.308 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.308 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.308 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.308 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.565 [2024-07-13 15:17:34.091807] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:03.565 [2024-07-13 15:17:34.091878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid980123 ] 00:06:03.565 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.565 [2024-07-13 15:17:34.125645] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
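Every section above tears its targets down through the same killprocess helper; its per-step trace (the Linux check, ps --no-headers -o comm=, the "killing process with pid ..." echo, kill, wait) repeats throughout this log. A rough reconstruction, based only on the lines traced here and not on the real autotest_common.sh:

killprocess() {
  local pid=$1 process_name
  [[ -n $pid ]] || return 1
  kill -0 "$pid" || return 0   # nothing to do if the process is already gone
  if [[ $(uname) == Linux ]]; then
    # SPDK reactors rename their threads, so comm= usually reports reactor_0, reactor_1, ...
    process_name=$(ps --no-headers -o comm= "$pid")
  fi
  echo "killing process with pid $pid"
  # The traced helper special-cases processes started via sudo; a plain kill is enough for this sketch.
  kill "$pid"
  wait "$pid" || true
}
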
00:06:03.565 [2024-07-13 15:17:34.153924] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.565 [2024-07-13 15:17:34.242407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.823 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.823 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:03.823 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=980126 00:06:03.823 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:03.823 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 980126 /var/tmp/spdk2.sock 00:06:03.823 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:03.823 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 980126 /var/tmp/spdk2.sock 00:06:03.823 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:03.823 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.823 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:03.823 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.823 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 980126 /var/tmp/spdk2.sock 00:06:03.823 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 980126 ']' 00:06:03.823 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.823 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.823 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.823 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.823 15:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.823 [2024-07-13 15:17:34.548117] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:03.823 [2024-07-13 15:17:34.548219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid980126 ] 00:06:03.823 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.823 [2024-07-13 15:17:34.586689] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:04.079 [2024-07-13 15:17:34.650904] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 980123 has claimed it. 00:06:04.079 [2024-07-13 15:17:34.650951] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:04.645 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (980126) - No such process 00:06:04.645 ERROR: process (pid: 980126) is no longer running 00:06:04.645 15:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.645 15:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:04.645 15:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:04.645 15:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:04.645 15:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:04.645 15:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:04.645 15:17:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 980123 00:06:04.645 15:17:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 980123 00:06:04.645 15:17:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.209 lslocks: write error 00:06:05.209 15:17:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 980123 00:06:05.209 15:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 980123 ']' 00:06:05.209 15:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 980123 00:06:05.209 15:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:05.209 15:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.209 15:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 980123 00:06:05.209 15:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:05.209 15:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:05.209 15:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 980123' 00:06:05.209 killing process with pid 980123 00:06:05.209 15:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 980123 00:06:05.209 15:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 980123 00:06:05.467 00:06:05.467 real 0m2.105s 00:06:05.467 user 0m2.278s 00:06:05.467 sys 0m0.668s 00:06:05.467 15:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.467 15:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.467 ************************************ 00:06:05.467 END TEST locking_app_on_locked_coremask 00:06:05.467 ************************************ 00:06:05.467 15:17:36 event.cpu_locks -- 
common/autotest_common.sh@1142 -- # return 0 00:06:05.467 15:17:36 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:05.467 15:17:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.467 15:17:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.467 15:17:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.467 ************************************ 00:06:05.467 START TEST locking_overlapped_coremask 00:06:05.467 ************************************ 00:06:05.467 15:17:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:05.467 15:17:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=980418 00:06:05.467 15:17:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:05.467 15:17:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 980418 /var/tmp/spdk.sock 00:06:05.467 15:17:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 980418 ']' 00:06:05.467 15:17:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.467 15:17:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.467 15:17:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.467 15:17:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.467 15:17:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.724 [2024-07-13 15:17:36.246236] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:05.724 [2024-07-13 15:17:36.246325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid980418 ] 00:06:05.724 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.724 [2024-07-13 15:17:36.278781] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
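locking_app_on_locked_coremask, which wrapped up just above, exercises the failure path: the first target holds the core 0 lock, so the second launch dies in claim_cpu_cores ("Cannot create lock on core 0, probably process 980123 has claimed it") before it ever listens on its socket, and the later kill finds no such process. The trace wraps that expectation in a NOT helper; a simplified sketch of the idea (the real helper in autotest_common.sh also tracks the exit status, so this is only the gist):

NOT() {
  # Invert a command's status: succeed only when the command fails.
  if "$@"; then
    return 1
  fi
  return 0
}

# A second target on an already-locked core must fail to come up on spdk2.sock.
NOT waitforlisten "$pid2" /var/tmp/spdk2.sock
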
00:06:05.724 [2024-07-13 15:17:36.312742] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.724 [2024-07-13 15:17:36.403864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.724 [2024-07-13 15:17:36.403918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.724 [2024-07-13 15:17:36.403937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.982 15:17:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.982 15:17:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:05.982 15:17:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=980426 00:06:05.982 15:17:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 980426 /var/tmp/spdk2.sock 00:06:05.982 15:17:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:05.982 15:17:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:05.982 15:17:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 980426 /var/tmp/spdk2.sock 00:06:05.982 15:17:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:05.982 15:17:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.982 15:17:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:05.982 15:17:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.982 15:17:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 980426 /var/tmp/spdk2.sock 00:06:05.982 15:17:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 980426 ']' 00:06:05.982 15:17:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.982 15:17:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.982 15:17:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.982 15:17:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.982 15:17:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.982 [2024-07-13 15:17:36.716741] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
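The overlap being provoked here can be read straight off the two core masks: the first target runs with -m 0x7 and the second with -m 0x1c, and the only core both masks select is core 2, which is exactly the core named in the claim error a few lines further down. A one-line check of that arithmetic:

# 0x7  = 0b00111 -> cores 0,1,2
# 0x1c = 0b11100 -> cores 2,3,4
printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2 only
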
00:06:05.982 [2024-07-13 15:17:36.716823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid980426 ] 00:06:05.982 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.238 [2024-07-13 15:17:36.752001] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:06.238 [2024-07-13 15:17:36.807014] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 980418 has claimed it. 00:06:06.238 [2024-07-13 15:17:36.807070] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:06.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (980426) - No such process 00:06:06.804 ERROR: process (pid: 980426) is no longer running 00:06:06.804 15:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.804 15:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:06.804 15:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:06.804 15:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:06.804 15:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:06.804 15:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:06.804 15:17:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:06.804 15:17:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:06.804 15:17:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:06.804 15:17:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:06.804 15:17:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 980418 00:06:06.804 15:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 980418 ']' 00:06:06.804 15:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 980418 00:06:06.804 15:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:06.804 15:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.804 15:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 980418 00:06:06.804 15:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:06.804 15:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:06.804 15:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 
-- # echo 'killing process with pid 980418' 00:06:06.804 killing process with pid 980418 00:06:06.804 15:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 980418 00:06:06.804 15:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 980418 00:06:07.370 00:06:07.370 real 0m1.654s 00:06:07.370 user 0m4.435s 00:06:07.370 sys 0m0.463s 00:06:07.370 15:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.370 15:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.370 ************************************ 00:06:07.370 END TEST locking_overlapped_coremask 00:06:07.370 ************************************ 00:06:07.370 15:17:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:07.370 15:17:37 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:07.370 15:17:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.370 15:17:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.370 15:17:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.370 ************************************ 00:06:07.370 START TEST locking_overlapped_coremask_via_rpc 00:06:07.370 ************************************ 00:06:07.370 15:17:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:07.370 15:17:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=980594 00:06:07.370 15:17:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:07.370 15:17:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 980594 /var/tmp/spdk.sock 00:06:07.370 15:17:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 980594 ']' 00:06:07.370 15:17:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.370 15:17:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.370 15:17:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.370 15:17:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.370 15:17:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.370 [2024-07-13 15:17:37.950170] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:06:07.370 [2024-07-13 15:17:37.950237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid980594 ] 00:06:07.370 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.370 [2024-07-13 15:17:37.980107] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:07.370 [2024-07-13 15:17:38.007517] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:07.370 [2024-07-13 15:17:38.007542] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.370 [2024-07-13 15:17:38.097172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.370 [2024-07-13 15:17:38.097228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.370 [2024-07-13 15:17:38.097230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.628 15:17:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.628 15:17:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:07.628 15:17:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=980718 00:06:07.628 15:17:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:07.628 15:17:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 980718 /var/tmp/spdk2.sock 00:06:07.628 15:17:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 980718 ']' 00:06:07.628 15:17:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.628 15:17:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.628 15:17:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.628 15:17:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.628 15:17:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.628 [2024-07-13 15:17:38.387012] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:07.628 [2024-07-13 15:17:38.387098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid980718 ] 00:06:07.886 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.886 [2024-07-13 15:17:38.423900] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:07.886 [2024-07-13 15:17:38.479196] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:07.886 [2024-07-13 15:17:38.479222] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.143 [2024-07-13 15:17:38.653239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.143 [2024-07-13 15:17:38.654964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:08.143 [2024-07-13 15:17:38.654967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.708 [2024-07-13 15:17:39.358966] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 980594 has claimed it. 
00:06:08.708 request: 00:06:08.708 { 00:06:08.708 "method": "framework_enable_cpumask_locks", 00:06:08.708 "req_id": 1 00:06:08.708 } 00:06:08.708 Got JSON-RPC error response 00:06:08.708 response: 00:06:08.708 { 00:06:08.708 "code": -32603, 00:06:08.708 "message": "Failed to claim CPU core: 2" 00:06:08.708 } 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 980594 /var/tmp/spdk.sock 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 980594 ']' 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.708 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.966 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.966 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:08.966 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 980718 /var/tmp/spdk2.sock 00:06:08.966 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 980718 ']' 00:06:08.966 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.966 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.966 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
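The JSON-RPC exchange just above is the heart of the via_rpc variant: both targets start with --disable-cpumask-locks, the first one then claims cores 0-2 at runtime with framework_enable_cpumask_locks, and the same call against the second target (which shares core 2) is expected to come back with error -32603. The rpc_cmd wrapper in the trace drives scripts/rpc.py, so the equivalent manual calls would look roughly like this (socket paths as they appear in this log):

# First target, default socket: take the locks for cores 0-2.
scripts/rpc.py framework_enable_cpumask_locks

# Second target shares core 2, so the same RPC on its socket must be rejected with
# {"code": -32603, "message": "Failed to claim CPU core: 2"}.
NOT scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
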
00:06:08.966 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.966 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.224 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.224 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:09.224 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:09.224 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:09.224 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:09.224 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:09.224 00:06:09.224 real 0m1.967s 00:06:09.224 user 0m1.041s 00:06:09.224 sys 0m0.172s 00:06:09.224 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.224 15:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.224 ************************************ 00:06:09.224 END TEST locking_overlapped_coremask_via_rpc 00:06:09.224 ************************************ 00:06:09.224 15:17:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:09.224 15:17:39 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:09.224 15:17:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 980594 ]] 00:06:09.224 15:17:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 980594 00:06:09.224 15:17:39 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 980594 ']' 00:06:09.224 15:17:39 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 980594 00:06:09.224 15:17:39 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:09.224 15:17:39 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.224 15:17:39 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 980594 00:06:09.224 15:17:39 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:09.224 15:17:39 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:09.224 15:17:39 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 980594' 00:06:09.224 killing process with pid 980594 00:06:09.224 15:17:39 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 980594 00:06:09.224 15:17:39 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 980594 00:06:09.790 15:17:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 980718 ]] 00:06:09.790 15:17:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 980718 00:06:09.790 15:17:40 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 980718 ']' 00:06:09.790 15:17:40 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 980718 00:06:09.790 15:17:40 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
00:06:09.790 15:17:40 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.790 15:17:40 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 980718 00:06:09.790 15:17:40 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:09.790 15:17:40 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:09.790 15:17:40 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 980718' 00:06:09.790 killing process with pid 980718 00:06:09.790 15:17:40 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 980718 00:06:09.790 15:17:40 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 980718 00:06:10.048 15:17:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:10.048 15:17:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:10.048 15:17:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 980594 ]] 00:06:10.048 15:17:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 980594 00:06:10.048 15:17:40 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 980594 ']' 00:06:10.048 15:17:40 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 980594 00:06:10.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (980594) - No such process 00:06:10.048 15:17:40 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 980594 is not found' 00:06:10.048 Process with pid 980594 is not found 00:06:10.048 15:17:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 980718 ]] 00:06:10.048 15:17:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 980718 00:06:10.048 15:17:40 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 980718 ']' 00:06:10.048 15:17:40 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 980718 00:06:10.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (980718) - No such process 00:06:10.048 15:17:40 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 980718 is not found' 00:06:10.048 Process with pid 980718 is not found 00:06:10.048 15:17:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:10.048 00:06:10.048 real 0m15.670s 00:06:10.048 user 0m27.365s 00:06:10.048 sys 0m5.358s 00:06:10.048 15:17:40 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.048 15:17:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.048 ************************************ 00:06:10.048 END TEST cpu_locks 00:06:10.048 ************************************ 00:06:10.048 15:17:40 event -- common/autotest_common.sh@1142 -- # return 0 00:06:10.048 00:06:10.048 real 0m39.438s 00:06:10.048 user 1m15.395s 00:06:10.048 sys 0m9.289s 00:06:10.048 15:17:40 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.048 15:17:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.048 ************************************ 00:06:10.048 END TEST event 00:06:10.048 ************************************ 00:06:10.048 15:17:40 -- common/autotest_common.sh@1142 -- # return 0 00:06:10.048 15:17:40 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:10.048 15:17:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.048 15:17:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.048 15:17:40 -- 
common/autotest_common.sh@10 -- # set +x 00:06:10.306 ************************************ 00:06:10.306 START TEST thread 00:06:10.306 ************************************ 00:06:10.306 15:17:40 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:10.306 * Looking for test storage... 00:06:10.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:10.306 15:17:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:10.306 15:17:40 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:10.306 15:17:40 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.306 15:17:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.306 ************************************ 00:06:10.306 START TEST thread_poller_perf 00:06:10.306 ************************************ 00:06:10.306 15:17:40 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:10.306 [2024-07-13 15:17:40.911590] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:10.306 [2024-07-13 15:17:40.911660] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid981093 ] 00:06:10.306 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.306 [2024-07-13 15:17:40.944588] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:10.306 [2024-07-13 15:17:40.971519] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.306 [2024-07-13 15:17:41.059901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.306 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:11.676 ====================================== 00:06:11.676 busy:2710962189 (cyc) 00:06:11.676 total_run_count: 293000 00:06:11.676 tsc_hz: 2700000000 (cyc) 00:06:11.676 ====================================== 00:06:11.676 poller_cost: 9252 (cyc), 3426 (nsec) 00:06:11.676 00:06:11.676 real 0m1.254s 00:06:11.676 user 0m1.159s 00:06:11.676 sys 0m0.089s 00:06:11.676 15:17:42 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.676 15:17:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.676 ************************************ 00:06:11.676 END TEST thread_poller_perf 00:06:11.676 ************************************ 00:06:11.676 15:17:42 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:11.676 15:17:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:11.677 15:17:42 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:11.677 15:17:42 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.677 15:17:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.677 ************************************ 00:06:11.677 START TEST thread_poller_perf 00:06:11.677 ************************************ 00:06:11.677 15:17:42 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:11.677 [2024-07-13 15:17:42.215803] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:11.677 [2024-07-13 15:17:42.215919] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid981247 ] 00:06:11.677 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.677 [2024-07-13 15:17:42.247469] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:11.677 [2024-07-13 15:17:42.279446] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.677 [2024-07-13 15:17:42.369342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.677 Running 1000 pollers for 1 seconds with 0 microseconds period. 
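The poller_cost figures in these result blocks follow directly from the other counters: busy cycles divided by total_run_count gives the cost of one poller invocation in cycles, and scaling by tsc_hz converts that to nanoseconds. A minimal bash re-derivation for the 1 microseconds run above (the same formula reproduces 699 cyc / 258 nsec for the 0 microseconds run that follows); the variable names are illustrative, not part of poller_perf itself:

busy=2710962189    # busy: cycles spent running pollers
runs=293000        # total_run_count
tsc=2700000000     # tsc_hz, cycles per second
cyc=$(( busy / runs ))               # 9252 cyc per poller call
nsec=$(( cyc * 1000000000 / tsc ))   # 3426 nsec per poller call
echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"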
00:06:13.093 ====================================== 00:06:13.093 busy:2702445067 (cyc) 00:06:13.093 total_run_count: 3863000 00:06:13.093 tsc_hz: 2700000000 (cyc) 00:06:13.093 ====================================== 00:06:13.093 poller_cost: 699 (cyc), 258 (nsec) 00:06:13.093 00:06:13.093 real 0m1.253s 00:06:13.093 user 0m1.169s 00:06:13.093 sys 0m0.078s 00:06:13.093 15:17:43 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.093 15:17:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:13.093 ************************************ 00:06:13.093 END TEST thread_poller_perf 00:06:13.093 ************************************ 00:06:13.093 15:17:43 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:13.093 15:17:43 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:13.093 00:06:13.093 real 0m2.659s 00:06:13.093 user 0m2.391s 00:06:13.093 sys 0m0.268s 00:06:13.093 15:17:43 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.093 15:17:43 thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.093 ************************************ 00:06:13.093 END TEST thread 00:06:13.093 ************************************ 00:06:13.093 15:17:43 -- common/autotest_common.sh@1142 -- # return 0 00:06:13.093 15:17:43 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:13.093 15:17:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.093 15:17:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.093 15:17:43 -- common/autotest_common.sh@10 -- # set +x 00:06:13.093 ************************************ 00:06:13.093 START TEST accel 00:06:13.093 ************************************ 00:06:13.093 15:17:43 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:13.093 * Looking for test storage... 00:06:13.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:13.093 15:17:43 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:13.093 15:17:43 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:13.093 15:17:43 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:13.093 15:17:43 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=981438 00:06:13.093 15:17:43 accel -- accel/accel.sh@63 -- # waitforlisten 981438 00:06:13.093 15:17:43 accel -- common/autotest_common.sh@829 -- # '[' -z 981438 ']' 00:06:13.093 15:17:43 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.093 15:17:43 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:13.093 15:17:43 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:13.093 15:17:43 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.093 15:17:43 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.093 15:17:43 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:13.093 15:17:43 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.093 15:17:43 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.093 15:17:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.093 15:17:43 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.093 15:17:43 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.093 15:17:43 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.093 15:17:43 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:13.093 15:17:43 accel -- accel/accel.sh@41 -- # jq -r . 00:06:13.093 [2024-07-13 15:17:43.635278] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:13.093 [2024-07-13 15:17:43.635361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid981438 ] 00:06:13.093 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.093 [2024-07-13 15:17:43.665822] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:13.093 [2024-07-13 15:17:43.695970] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.093 [2024-07-13 15:17:43.786893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.350 15:17:44 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.350 15:17:44 accel -- common/autotest_common.sh@862 -- # return 0 00:06:13.350 15:17:44 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:13.350 15:17:44 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:13.350 15:17:44 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:13.350 15:17:44 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:13.350 15:17:44 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:13.350 15:17:44 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:13.350 15:17:44 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:13.350 15:17:44 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.350 15:17:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.350 15:17:44 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.350 15:17:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.350 15:17:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.350 15:17:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.350 15:17:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.350 15:17:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.350 15:17:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.350 15:17:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.350 15:17:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.350 15:17:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.350 15:17:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.350 15:17:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.350 15:17:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.350 15:17:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.350 15:17:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.350 15:17:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.350 15:17:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.350 15:17:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.350 15:17:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.351 15:17:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.351 15:17:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.351 15:17:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.351 15:17:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.351 15:17:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.351 15:17:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.351 15:17:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.351 15:17:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.351 15:17:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.351 15:17:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.351 15:17:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.351 15:17:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.351 15:17:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.351 15:17:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.351 15:17:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.351 15:17:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.351 15:17:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.351 15:17:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.351 15:17:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.351 15:17:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.351 15:17:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.351 15:17:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.351 15:17:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.351 15:17:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.351 15:17:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.351 
15:17:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.351 15:17:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.351 15:17:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.351 15:17:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.351 15:17:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.351 15:17:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.351 15:17:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.351 15:17:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.351 15:17:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.351 15:17:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.351 15:17:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.351 15:17:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.351 15:17:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.351 15:17:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.351 15:17:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.351 15:17:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.351 15:17:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.351 15:17:44 accel -- accel/accel.sh@75 -- # killprocess 981438 00:06:13.351 15:17:44 accel -- common/autotest_common.sh@948 -- # '[' -z 981438 ']' 00:06:13.351 15:17:44 accel -- common/autotest_common.sh@952 -- # kill -0 981438 00:06:13.351 15:17:44 accel -- common/autotest_common.sh@953 -- # uname 00:06:13.351 15:17:44 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.351 15:17:44 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 981438 00:06:13.608 15:17:44 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:13.608 15:17:44 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:13.608 15:17:44 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 981438' 00:06:13.608 killing process with pid 981438 00:06:13.608 15:17:44 accel -- common/autotest_common.sh@967 -- # kill 981438 00:06:13.608 15:17:44 accel -- common/autotest_common.sh@972 -- # wait 981438 00:06:13.865 15:17:44 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:13.865 15:17:44 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:13.865 15:17:44 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:13.865 15:17:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.865 15:17:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.865 15:17:44 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:13.865 15:17:44 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:13.865 15:17:44 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:13.865 15:17:44 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.865 15:17:44 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.865 15:17:44 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.865 15:17:44 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.865 15:17:44 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.865 15:17:44 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:13.865 15:17:44 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:13.865 15:17:44 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.865 15:17:44 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:13.865 15:17:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:13.865 15:17:44 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:13.865 15:17:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:13.865 15:17:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.865 15:17:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.865 ************************************ 00:06:13.865 START TEST accel_missing_filename 00:06:13.865 ************************************ 00:06:13.865 15:17:44 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:13.865 15:17:44 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:13.865 15:17:44 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:13.865 15:17:44 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:13.865 15:17:44 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.865 15:17:44 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:13.865 15:17:44 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.865 15:17:44 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:13.865 15:17:44 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:13.865 15:17:44 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:13.865 15:17:44 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.865 15:17:44 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.865 15:17:44 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.865 15:17:44 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.865 15:17:44 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.865 15:17:44 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:13.865 15:17:44 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:14.123 [2024-07-13 15:17:44.632723] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:14.123 [2024-07-13 15:17:44.632789] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid981609 ] 00:06:14.123 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.123 [2024-07-13 15:17:44.665830] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:14.123 [2024-07-13 15:17:44.696353] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.123 [2024-07-13 15:17:44.789500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.123 [2024-07-13 15:17:44.850732] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.381 [2024-07-13 15:17:44.931235] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:14.381 A filename is required. 00:06:14.381 15:17:45 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:14.381 15:17:45 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:14.381 15:17:45 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:14.381 15:17:45 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:14.381 15:17:45 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:14.381 15:17:45 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:14.381 00:06:14.381 real 0m0.394s 00:06:14.381 user 0m0.288s 00:06:14.381 sys 0m0.141s 00:06:14.381 15:17:45 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.381 15:17:45 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:14.381 ************************************ 00:06:14.381 END TEST accel_missing_filename 00:06:14.381 ************************************ 00:06:14.381 15:17:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.381 15:17:45 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:14.381 15:17:45 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:14.381 15:17:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.381 15:17:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.381 ************************************ 00:06:14.381 START TEST accel_compress_verify 00:06:14.381 ************************************ 00:06:14.381 15:17:45 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:14.381 15:17:45 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:14.381 15:17:45 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:14.381 15:17:45 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:14.381 15:17:45 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.381 15:17:45 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:14.381 15:17:45 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.381 15:17:45 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:14.381 15:17:45 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:14.381 15:17:45 
accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:14.381 15:17:45 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.381 15:17:45 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.381 15:17:45 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.381 15:17:45 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.381 15:17:45 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.381 15:17:45 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:14.381 15:17:45 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:14.381 [2024-07-13 15:17:45.070682] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:14.381 [2024-07-13 15:17:45.070750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid981753 ] 00:06:14.381 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.381 [2024-07-13 15:17:45.107303] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:14.381 [2024-07-13 15:17:45.138198] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.639 [2024-07-13 15:17:45.230342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.639 [2024-07-13 15:17:45.286878] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.639 [2024-07-13 15:17:45.371326] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:14.900 00:06:14.900 Compression does not support the verify option, aborting. 
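The es=234 / es=106 / es=1 sequence further up (and the es=161 / es=33 / es=1 one that follows) is the negative-test wrapper normalising the child's exit status before asserting that the run really failed: a status above 128 has 128 subtracted from it, the remainder is collapsed to 1, and (( !es == 0 )) then succeeds only for a genuine failure. A rough sketch of that flow with a simplified NOT() helper (the real helper in test/common/autotest_common.sh handles more cases than shown here):

# Simplified reconstruction of the exit-status handling traced above; not the actual helper.
NOT() {
    local es=0
    "$@" || es=$?                        # run the command that is expected to fail
    (( es > 128 )) && es=$(( es - 128 )) # 234 -> 106, 161 -> 33
    case "$es" in
        0) ;;                            # command unexpectedly succeeded
        *) es=1 ;;                       # any real failure collapses to 1
    esac
    (( !es == 0 ))                       # return success only if the wrapped command failed
}
# e.g. NOT accel_perf -t 1 -w compress  # passes here because the required -l input file is missing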
00:06:14.900 15:17:45 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:14.900 15:17:45 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:14.900 15:17:45 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:14.900 15:17:45 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:14.900 15:17:45 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:14.900 15:17:45 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:14.900 00:06:14.900 real 0m0.398s 00:06:14.900 user 0m0.294s 00:06:14.900 sys 0m0.139s 00:06:14.900 15:17:45 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.900 15:17:45 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:14.900 ************************************ 00:06:14.900 END TEST accel_compress_verify 00:06:14.900 ************************************ 00:06:14.900 15:17:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.900 15:17:45 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:14.900 15:17:45 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:14.900 15:17:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.900 15:17:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.900 ************************************ 00:06:14.900 START TEST accel_wrong_workload 00:06:14.900 ************************************ 00:06:14.900 15:17:45 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:14.900 15:17:45 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:14.900 15:17:45 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:14.900 15:17:45 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:14.900 15:17:45 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.900 15:17:45 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:14.900 15:17:45 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.900 15:17:45 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:14.900 15:17:45 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:14.900 15:17:45 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:14.900 15:17:45 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.900 15:17:45 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.900 15:17:45 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.900 15:17:45 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.900 15:17:45 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.900 15:17:45 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:14.900 15:17:45 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:06:14.900 Unsupported workload type: foobar 00:06:14.900 [2024-07-13 15:17:45.511340] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:14.900 accel_perf options: 00:06:14.900 [-h help message] 00:06:14.900 [-q queue depth per core] 00:06:14.900 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:14.900 [-T number of threads per core 00:06:14.900 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:14.900 [-t time in seconds] 00:06:14.900 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:14.900 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:14.900 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:14.900 [-l for compress/decompress workloads, name of uncompressed input file 00:06:14.900 [-S for crc32c workload, use this seed value (default 0) 00:06:14.900 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:14.900 [-f for fill workload, use this BYTE value (default 255) 00:06:14.900 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:14.900 [-y verify result if this switch is on] 00:06:14.900 [-a tasks to allocate per core (default: same value as -q)] 00:06:14.900 Can be used to spread operations across a wider range of memory. 00:06:14.900 15:17:45 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:14.900 15:17:45 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:14.900 15:17:45 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:14.900 15:17:45 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:14.900 00:06:14.900 real 0m0.021s 00:06:14.900 user 0m0.011s 00:06:14.900 sys 0m0.010s 00:06:14.900 15:17:45 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.900 15:17:45 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:14.900 ************************************ 00:06:14.900 END TEST accel_wrong_workload 00:06:14.900 ************************************ 00:06:14.900 Error: writing output failed: Broken pipe 00:06:14.900 15:17:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.900 15:17:45 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:14.900 15:17:45 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:14.900 15:17:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.900 15:17:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.900 ************************************ 00:06:14.900 START TEST accel_negative_buffers 00:06:14.900 ************************************ 00:06:14.900 15:17:45 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:14.900 15:17:45 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:14.900 15:17:45 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:14.900 15:17:45 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:14.900 15:17:45 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 00:06:14.900 15:17:45 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:14.900 15:17:45 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.900 15:17:45 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:14.900 15:17:45 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:14.900 15:17:45 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:14.900 15:17:45 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.900 15:17:45 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.900 15:17:45 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.900 15:17:45 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.900 15:17:45 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.900 15:17:45 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:14.900 15:17:45 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:14.900 -x option must be non-negative. 00:06:14.900 [2024-07-13 15:17:45.583374] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:14.900 accel_perf options: 00:06:14.900 [-h help message] 00:06:14.900 [-q queue depth per core] 00:06:14.900 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:14.900 [-T number of threads per core 00:06:14.900 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:14.900 [-t time in seconds] 00:06:14.900 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:14.900 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:14.900 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:14.900 [-l for compress/decompress workloads, name of uncompressed input file 00:06:14.900 [-S for crc32c workload, use this seed value (default 0) 00:06:14.900 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:14.900 [-f for fill workload, use this BYTE value (default 255) 00:06:14.900 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:14.900 [-y verify result if this switch is on] 00:06:14.900 [-a tasks to allocate per core (default: same value as -q)] 00:06:14.900 Can be used to spread operations across a wider range of memory. 
00:06:14.900 15:17:45 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:14.900 15:17:45 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:14.900 15:17:45 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:14.900 15:17:45 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:14.900 00:06:14.900 real 0m0.024s 00:06:14.900 user 0m0.011s 00:06:14.900 sys 0m0.013s 00:06:14.900 15:17:45 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.900 15:17:45 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:14.900 ************************************ 00:06:14.900 END TEST accel_negative_buffers 00:06:14.900 ************************************ 00:06:14.901 Error: writing output failed: Broken pipe 00:06:14.901 15:17:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.901 15:17:45 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:14.901 15:17:45 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:14.901 15:17:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.901 15:17:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.901 ************************************ 00:06:14.901 START TEST accel_crc32c 00:06:14.901 ************************************ 00:06:14.901 15:17:45 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:14.901 15:17:45 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:14.901 15:17:45 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:14.901 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.901 15:17:45 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:14.901 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.901 15:17:45 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:14.901 15:17:45 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:14.901 15:17:45 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.901 15:17:45 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.901 15:17:45 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.901 15:17:45 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.901 15:17:45 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.901 15:17:45 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:14.901 15:17:45 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:14.901 [2024-07-13 15:17:45.644843] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:14.901 [2024-07-13 15:17:45.644914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid981822 ] 00:06:15.160 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.160 [2024-07-13 15:17:45.679177] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:15.160 [2024-07-13 15:17:45.709921] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.160 [2024-07-13 15:17:45.802739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.160 15:17:45 accel.accel_crc32c 
-- accel/accel.sh@20 -- # val=32 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.160 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.161 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.161 15:17:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.161 15:17:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.161 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.161 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.161 15:17:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.161 15:17:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.161 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.161 15:17:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.533 
15:17:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:16.533 15:17:47 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.533 00:06:16.533 real 0m1.410s 00:06:16.533 user 0m1.266s 00:06:16.533 sys 0m0.147s 00:06:16.533 15:17:47 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.533 15:17:47 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:16.533 ************************************ 00:06:16.533 END TEST accel_crc32c 00:06:16.533 ************************************ 00:06:16.533 15:17:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:16.533 15:17:47 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:16.533 15:17:47 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:16.533 15:17:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.533 15:17:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.533 ************************************ 00:06:16.533 START TEST accel_crc32c_C2 00:06:16.533 ************************************ 00:06:16.533 15:17:47 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:16.533 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.533 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:16.533 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.533 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:16.533 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.533 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:16.533 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.533 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.533 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.533 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.533 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.533 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.533 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:16.533 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:16.533 [2024-07-13 15:17:47.103711] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:06:16.533 [2024-07-13 15:17:47.103773] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid981979 ] 00:06:16.533 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.533 [2024-07-13 15:17:47.135626] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:16.533 [2024-07-13 15:17:47.165691] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.533 [2024-07-13 15:17:47.262254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.791 15:17:47 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:16.791 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.792 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.792 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.792 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.792 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.792 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.792 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.792 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:16.792 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.792 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.792 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.792 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.792 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.792 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.792 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:16.792 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:16.792 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.792 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:16.792 15:17:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.725 00:06:17.725 real 0m1.400s 00:06:17.725 user 0m1.256s 00:06:17.725 sys 0m0.146s 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.725 15:17:48 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:17.725 ************************************ 00:06:17.725 END TEST accel_crc32c_C2 00:06:17.725 ************************************ 00:06:17.983 15:17:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.983 15:17:48 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:17.983 15:17:48 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:17.983 15:17:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.983 15:17:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.983 ************************************ 00:06:17.983 START TEST accel_copy 00:06:17.983 ************************************ 00:06:17.983 15:17:48 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:17.983 15:17:48 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:17.983 15:17:48 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:17.983 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:17.983 15:17:48 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:17.983 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:17.983 15:17:48 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:17.983 15:17:48 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:17.983 15:17:48 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.983 15:17:48 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.983 15:17:48 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 
00:06:17.983 15:17:48 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.983 15:17:48 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.983 15:17:48 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:17.983 15:17:48 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:17.983 [2024-07-13 15:17:48.557139] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:17.983 [2024-07-13 15:17:48.557214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid982247 ] 00:06:17.983 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.983 [2024-07-13 15:17:48.589342] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:17.983 [2024-07-13 15:17:48.620372] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.983 [2024-07-13 15:17:48.709665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.241 15:17:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:18.242 15:17:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.242 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.242 15:17:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.175 15:17:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.175 15:17:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.175 15:17:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.175 15:17:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.175 15:17:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.175 15:17:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.175 15:17:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.175 15:17:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.175 15:17:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.175 15:17:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.175 15:17:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.175 15:17:49 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:06:19.175 15:17:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.175 15:17:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.175 15:17:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.175 15:17:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.175 15:17:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.434 15:17:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.434 15:17:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.434 15:17:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.434 15:17:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.434 15:17:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.434 15:17:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.434 15:17:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.434 15:17:49 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.434 15:17:49 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:19.434 15:17:49 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.434 00:06:19.434 real 0m1.404s 00:06:19.434 user 0m1.259s 00:06:19.434 sys 0m0.147s 00:06:19.434 15:17:49 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.434 15:17:49 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:19.434 ************************************ 00:06:19.434 END TEST accel_copy 00:06:19.434 ************************************ 00:06:19.434 15:17:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.434 15:17:49 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:19.434 15:17:49 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:19.434 15:17:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.434 15:17:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.434 ************************************ 00:06:19.434 START TEST accel_fill 00:06:19.434 ************************************ 00:06:19.434 15:17:49 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:19.434 15:17:49 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:19.434 15:17:49 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:19.434 15:17:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.434 15:17:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.434 15:17:49 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:19.434 15:17:49 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:19.434 15:17:49 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:19.434 15:17:49 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.434 15:17:49 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.434 15:17:49 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.434 15:17:49 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.434 15:17:49 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.434 15:17:49 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:19.434 15:17:49 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
00:06:19.435 [2024-07-13 15:17:50.001859] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:19.435 [2024-07-13 15:17:50.001948] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid982411 ] 00:06:19.435 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.435 [2024-07-13 15:17:50.047361] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:19.435 [2024-07-13 15:17:50.075723] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.435 [2024-07-13 15:17:50.169557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:19.694 15:17:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.070 15:17:51 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:21.070 15:17:51 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.070 00:06:21.070 real 0m1.422s 00:06:21.070 user 0m1.267s 00:06:21.070 sys 0m0.158s 00:06:21.070 15:17:51 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.070 15:17:51 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:21.070 ************************************ 00:06:21.070 END TEST accel_fill 00:06:21.070 ************************************ 00:06:21.070 15:17:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.070 15:17:51 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:21.070 15:17:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:21.070 15:17:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.070 15:17:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.070 ************************************ 00:06:21.070 START TEST accel_copy_crc32c 00:06:21.070 ************************************ 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:21.070 
15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:21.070 [2024-07-13 15:17:51.470454] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:21.070 [2024-07-13 15:17:51.470517] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid982567 ] 00:06:21.070 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.070 [2024-07-13 15:17:51.503933] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:21.070 [2024-07-13 15:17:51.534513] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.070 [2024-07-13 15:17:51.627768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:21.070 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.071 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.071 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.071 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:21.071 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.071 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.071 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.071 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.071 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.071 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.071 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.071 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:21.071 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.071 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.071 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.071 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.071 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.071 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.071 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.071 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.071 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.071 15:17:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.071 15:17:51 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.443 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.443 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.443 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.443 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.443 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.443 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.443 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.443 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.443 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.443 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.443 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.443 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.443 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.444 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.444 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.444 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.444 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.444 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.444 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.444 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.444 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.444 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.444 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.444 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.444 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.444 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:22.444 15:17:52 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.444 00:06:22.444 real 0m1.409s 00:06:22.444 user 0m1.264s 00:06:22.444 sys 0m0.148s 00:06:22.444 15:17:52 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.444 15:17:52 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:22.444 ************************************ 00:06:22.444 END TEST accel_copy_crc32c 00:06:22.444 ************************************ 00:06:22.444 15:17:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.444 15:17:52 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:22.444 15:17:52 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:22.444 15:17:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.444 15:17:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.444 ************************************ 00:06:22.444 START TEST accel_copy_crc32c_C2 00:06:22.444 ************************************ 00:06:22.444 15:17:52 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:22.444 15:17:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local 
accel_opc 00:06:22.444 15:17:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:22.444 15:17:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.444 15:17:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:22.444 15:17:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.444 15:17:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:22.444 15:17:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.444 15:17:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.444 15:17:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.444 15:17:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.444 15:17:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.444 15:17:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.444 15:17:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:22.444 15:17:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:22.444 [2024-07-13 15:17:52.923705] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:22.444 [2024-07-13 15:17:52.923767] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid982733 ] 00:06:22.444 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.444 [2024-07-13 15:17:52.956263] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:22.444 [2024-07-13 15:17:52.986674] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.444 [2024-07-13 15:17:53.078882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.444 15:17:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.818 00:06:23.818 real 0m1.408s 00:06:23.818 user 0m1.268s 00:06:23.818 sys 0m0.143s 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.818 15:17:54 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:23.818 ************************************ 00:06:23.818 END TEST accel_copy_crc32c_C2 00:06:23.818 ************************************ 00:06:23.818 15:17:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.818 15:17:54 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:23.818 15:17:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:23.818 15:17:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.818 15:17:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.818 ************************************ 00:06:23.818 START TEST accel_dualcast 00:06:23.818 ************************************ 00:06:23.818 15:17:54 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:23.818 15:17:54 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:23.818 15:17:54 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:23.818 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:23.819 15:17:54 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:23.819 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:23.819 15:17:54 accel.accel_dualcast -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:23.819 15:17:54 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:23.819 15:17:54 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.819 15:17:54 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.819 15:17:54 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.819 15:17:54 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.819 15:17:54 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.819 15:17:54 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:23.819 15:17:54 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:23.819 [2024-07-13 15:17:54.381956] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:23.819 [2024-07-13 15:17:54.382020] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid982996 ] 00:06:23.819 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.819 [2024-07-13 15:17:54.414259] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:23.819 [2024-07-13 15:17:54.441345] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.819 [2024-07-13 15:17:54.534349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:24.077 15:17:54 accel.accel_dualcast -- 
accel/accel.sh@19 -- # IFS=: 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.077 15:17:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.011 15:17:55 accel.accel_dualcast -- 
accel/accel.sh@21 -- # case "$var" in 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:25.011 15:17:55 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.011 00:06:25.011 real 0m1.396s 00:06:25.011 user 0m1.257s 00:06:25.011 sys 0m0.141s 00:06:25.011 15:17:55 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.011 15:17:55 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:25.011 ************************************ 00:06:25.011 END TEST accel_dualcast 00:06:25.011 ************************************ 00:06:25.270 15:17:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:25.270 15:17:55 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:25.270 15:17:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:25.270 15:17:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.270 15:17:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.270 ************************************ 00:06:25.270 START TEST accel_compare 00:06:25.270 ************************************ 00:06:25.270 15:17:55 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:25.270 15:17:55 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:25.270 15:17:55 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:25.270 15:17:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.270 15:17:55 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:25.270 15:17:55 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:25.270 15:17:55 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:25.270 15:17:55 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:25.270 15:17:55 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.270 15:17:55 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.270 15:17:55 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.270 15:17:55 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.270 15:17:55 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.270 15:17:55 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:25.270 15:17:55 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:25.270 [2024-07-13 15:17:55.823644] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:25.270 [2024-07-13 15:17:55.823708] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid983152 ] 00:06:25.270 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.270 [2024-07-13 15:17:55.855198] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:25.270 [2024-07-13 15:17:55.887108] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.270 [2024-07-13 15:17:55.978761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@23 
-- # accel_opc=compare 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.528 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.529 15:17:56 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.529 15:17:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.529 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.529 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.529 15:17:56 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:25.529 15:17:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.529 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.529 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.529 15:17:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.529 15:17:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.529 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.529 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.529 15:17:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.529 15:17:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.529 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.529 15:17:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.464 15:17:57 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:26.464 15:17:57 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.464 00:06:26.464 real 0m1.411s 00:06:26.464 user 0m1.266s 00:06:26.464 sys 0m0.147s 00:06:26.464 15:17:57 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.464 15:17:57 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:26.464 ************************************ 00:06:26.464 END TEST accel_compare 00:06:26.464 ************************************ 00:06:26.722 15:17:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.722 15:17:57 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:26.722 15:17:57 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:26.722 15:17:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.722 15:17:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.722 ************************************ 00:06:26.722 START TEST accel_xor 00:06:26.722 ************************************ 00:06:26.722 15:17:57 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:26.722 15:17:57 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:26.722 15:17:57 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:26.722 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.722 15:17:57 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:26.722 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.722 15:17:57 
accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:26.722 15:17:57 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:26.722 15:17:57 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.722 15:17:57 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.722 15:17:57 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.722 15:17:57 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.722 15:17:57 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.722 15:17:57 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:26.722 15:17:57 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:26.722 [2024-07-13 15:17:57.284972] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:26.722 [2024-07-13 15:17:57.285032] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid983312 ] 00:06:26.722 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.722 [2024-07-13 15:17:57.315149] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:26.722 [2024-07-13 15:17:57.347089] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.723 [2024-07-13 15:17:57.439003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.981 15:17:57 
accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.981 15:17:57 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.982 15:17:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.944 15:17:58 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:27.944 15:17:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.944 00:06:27.944 real 0m1.408s 00:06:27.944 user 0m1.270s 00:06:27.944 sys 0m0.141s 00:06:27.944 15:17:58 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.944 15:17:58 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:27.944 ************************************ 00:06:27.944 END TEST accel_xor 00:06:27.944 ************************************ 00:06:27.944 15:17:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.944 15:17:58 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:27.944 15:17:58 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:27.944 15:17:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.944 15:17:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.203 ************************************ 00:06:28.203 START TEST accel_xor 00:06:28.203 ************************************ 00:06:28.203 15:17:58 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:28.203 15:17:58 accel.accel_xor 
-- accel/accel.sh@12 -- # build_accel_config 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:28.203 [2024-07-13 15:17:58.735355] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:28.203 [2024-07-13 15:17:58.735416] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid983581 ] 00:06:28.203 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.203 [2024-07-13 15:17:58.767337] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:28.203 [2024-07-13 15:17:58.797649] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.203 [2024-07-13 15:17:58.891073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # 
IFS=: 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.203 15:17:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.578 15:18:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.578 15:18:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.578 15:18:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.578 15:18:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.579 15:18:00 accel.accel_xor -- accel/accel.sh@20 -- # 
val= 00:06:29.579 15:18:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.579 15:18:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.579 15:18:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.579 15:18:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.579 15:18:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.579 15:18:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.579 15:18:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.579 15:18:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.579 15:18:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.579 15:18:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.579 15:18:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.579 15:18:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.579 15:18:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.579 15:18:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.579 15:18:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.579 15:18:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.579 15:18:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.579 15:18:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.579 15:18:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.579 15:18:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.579 15:18:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:29.579 15:18:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.579 00:06:29.579 real 0m1.397s 00:06:29.579 user 0m1.256s 00:06:29.579 sys 0m0.143s 00:06:29.579 15:18:00 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.579 15:18:00 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:29.579 ************************************ 00:06:29.579 END TEST accel_xor 00:06:29.579 ************************************ 00:06:29.579 15:18:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.579 15:18:00 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:29.579 15:18:00 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:29.579 15:18:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.579 15:18:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.579 ************************************ 00:06:29.579 START TEST accel_dif_verify 00:06:29.579 ************************************ 00:06:29.579 15:18:00 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:29.579 15:18:00 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:29.579 15:18:00 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:29.579 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.579 15:18:00 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:29.579 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.579 15:18:00 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:29.579 15:18:00 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:29.579 15:18:00 accel.accel_dif_verify -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:06:29.579 15:18:00 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.579 15:18:00 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.579 15:18:00 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.579 15:18:00 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.579 15:18:00 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:29.579 15:18:00 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:29.579 [2024-07-13 15:18:00.183346] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:29.579 [2024-07-13 15:18:00.183417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid983764 ] 00:06:29.579 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.579 [2024-07-13 15:18:00.215916] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:29.579 [2024-07-13 15:18:00.245927] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.838 [2024-07-13 15:18:00.344913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 
00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.838 15:18:00 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # IFS=: 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.838 15:18:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:31.210 15:18:01 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.210 00:06:31.210 real 0m1.398s 00:06:31.210 user 0m1.253s 00:06:31.210 sys 0m0.149s 00:06:31.210 15:18:01 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.210 15:18:01 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:31.210 ************************************ 00:06:31.210 END TEST accel_dif_verify 00:06:31.210 ************************************ 00:06:31.210 15:18:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.210 15:18:01 accel -- 
accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:31.210 15:18:01 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:31.210 15:18:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.211 15:18:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.211 ************************************ 00:06:31.211 START TEST accel_dif_generate 00:06:31.211 ************************************ 00:06:31.211 15:18:01 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:31.211 [2024-07-13 15:18:01.627234] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:31.211 [2024-07-13 15:18:01.627297] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid984001 ] 00:06:31.211 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.211 [2024-07-13 15:18:01.659733] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:31.211 [2024-07-13 15:18:01.689625] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.211 [2024-07-13 15:18:01.781889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.211 
15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.211 15:18:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.584 15:18:03 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:32.584 15:18:03 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.584 00:06:32.584 real 0m1.409s 00:06:32.584 user 0m1.275s 00:06:32.584 sys 0m0.137s 00:06:32.584 15:18:03 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.584 15:18:03 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:32.584 ************************************ 00:06:32.584 END TEST accel_dif_generate 00:06:32.584 ************************************ 00:06:32.584 15:18:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.584 15:18:03 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:32.584 15:18:03 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:32.584 15:18:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.584 15:18:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.584 ************************************ 00:06:32.584 START TEST accel_dif_generate_copy 00:06:32.584 ************************************ 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
dif_generate_copy 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:32.584 [2024-07-13 15:18:03.088274] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:32.584 [2024-07-13 15:18:03.088339] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid984174 ] 00:06:32.584 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.584 [2024-07-13 15:18:03.121341] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:32.584 [2024-07-13 15:18:03.154155] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.584 [2024-07-13 15:18:03.250307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.584 15:18:03 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.584 15:18:03 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.584 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.585 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.585 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:32.585 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.585 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.585 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.585 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:32.585 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.585 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.585 15:18:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:06:33.957 00:06:33.957 real 0m1.419s 00:06:33.957 user 0m1.281s 00:06:33.957 sys 0m0.139s 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.957 15:18:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:33.957 ************************************ 00:06:33.957 END TEST accel_dif_generate_copy 00:06:33.957 ************************************ 00:06:33.957 15:18:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.957 15:18:04 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:33.957 15:18:04 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.957 15:18:04 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:33.957 15:18:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.957 15:18:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.957 ************************************ 00:06:33.957 START TEST accel_comp 00:06:33.957 ************************************ 00:06:33.957 15:18:04 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.957 15:18:04 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:33.957 15:18:04 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:33.957 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.957 15:18:04 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.957 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.957 15:18:04 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.957 15:18:04 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:33.957 15:18:04 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.957 15:18:04 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.957 15:18:04 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.957 15:18:04 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.957 15:18:04 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.957 15:18:04 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:33.957 15:18:04 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:33.957 [2024-07-13 15:18:04.549822] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:33.957 [2024-07-13 15:18:04.549894] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid984440 ] 00:06:33.957 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.957 [2024-07-13 15:18:04.583477] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
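For readers following the trace: each accel case here reduces to a single accel_perf invocation, and the knobs that matter are visible in the val= entries above. A minimal sketch of rerunning the dif_generate_copy measurement by hand, assuming a built SPDK tree at the same workspace path, is roughly:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w dif_generate_copy

Here -t 1 caps the run at one second and -w selects the operation, matching the val='1 seconds' and accel_opc=dif_generate_copy entries traced above; the -c /dev/fd/62 argument the harness adds only feeds in the JSON accel configuration assembled by build_accel_config and should be optional for a plain software-module run.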
00:06:33.957 [2024-07-13 15:18:04.613150] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.957 [2024-07-13 15:18:04.707371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.216 15:18:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:35.589 15:18:05 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.589 00:06:35.589 real 0m1.407s 00:06:35.589 user 0m1.272s 00:06:35.589 sys 0m0.137s 00:06:35.589 15:18:05 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.589 15:18:05 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:35.589 ************************************ 00:06:35.589 END TEST accel_comp 00:06:35.589 ************************************ 00:06:35.589 15:18:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:35.589 15:18:05 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.589 15:18:05 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:35.589 15:18:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.589 15:18:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.589 ************************************ 00:06:35.589 START TEST accel_decomp 00:06:35.589 ************************************ 00:06:35.589 15:18:05 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.589 15:18:05 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:35.589 15:18:05 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:35.589 15:18:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.589 15:18:05 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.589 15:18:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.589 15:18:05 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.589 15:18:05 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:35.589 15:18:05 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.589 15:18:05 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.589 15:18:05 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.589 15:18:05 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.590 15:18:05 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.590 15:18:05 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:35.590 15:18:05 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 
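The compress and decompress cases reuse the same pattern with two extra flags: -l points accel_perf at the test input file (spdk/test/accel/bib), and on the decompress runs -y appears to enable verification of the output, judging by the val=Yes entries that show up where the unverified cases record val=No. A hedged sketch of the pair, assuming the same relative paths inside the SPDK checkout:

  ./build/examples/accel_perf -t 1 -w compress   -l test/accel/bib
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y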
00:06:35.590 [2024-07-13 15:18:05.997687] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:35.590 [2024-07-13 15:18:05.997750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid984597 ] 00:06:35.590 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.590 [2024-07-13 15:18:06.029529] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:35.590 [2024-07-13 15:18:06.059579] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.590 [2024-07-13 15:18:06.162091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 
00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:35.590 15:18:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.964 15:18:07 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:36.964 15:18:07 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.964 00:06:36.964 real 0m1.407s 00:06:36.964 user 0m1.264s 00:06:36.964 sys 0m0.145s 00:06:36.964 15:18:07 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.964 15:18:07 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:36.964 ************************************ 00:06:36.964 END TEST accel_decomp 00:06:36.964 ************************************ 00:06:36.964 15:18:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.964 15:18:07 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:36.964 15:18:07 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:36.964 15:18:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.964 15:18:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.964 ************************************ 00:06:36.964 START TEST accel_decomp_full 00:06:36.964 ************************************ 00:06:36.964 15:18:07 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # 
read -r var val 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:36.964 [2024-07-13 15:18:07.453130] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:36.964 [2024-07-13 15:18:07.453219] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid984915 ] 00:06:36.964 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.964 [2024-07-13 15:18:07.486693] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:36.964 [2024-07-13 15:18:07.519447] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.964 [2024-07-13 15:18:07.612569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.964 15:18:07 accel.accel_decomp_full -- 
accel/accel.sh@20 -- # val= 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:36.964 15:18:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.965 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.965 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.965 15:18:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.965 15:18:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.965 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.965 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 
00:06:36.965 15:18:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:36.965 15:18:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.965 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.965 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.965 15:18:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:36.965 15:18:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.965 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.965 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.965 15:18:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:36.965 15:18:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.965 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.965 15:18:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:38.337 15:18:08 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.337 00:06:38.337 real 0m1.414s 00:06:38.337 user 0m1.269s 00:06:38.337 sys 0m0.148s 00:06:38.337 15:18:08 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.337 15:18:08 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 
00:06:38.337 ************************************ 00:06:38.337 END TEST accel_decomp_full 00:06:38.337 ************************************ 00:06:38.337 15:18:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.337 15:18:08 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:38.337 15:18:08 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:38.337 15:18:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.337 15:18:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.337 ************************************ 00:06:38.337 START TEST accel_decomp_mcore 00:06:38.337 ************************************ 00:06:38.337 15:18:08 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:38.337 15:18:08 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:38.337 15:18:08 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:38.337 15:18:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.337 15:18:08 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:38.337 15:18:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.338 15:18:08 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:38.338 15:18:08 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:38.338 15:18:08 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.338 15:18:08 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.338 15:18:08 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.338 15:18:08 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.338 15:18:08 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.338 15:18:08 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:38.338 15:18:08 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:38.338 [2024-07-13 15:18:08.908945] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:38.338 [2024-07-13 15:18:08.909015] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid985512 ] 00:06:38.338 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.338 [2024-07-13 15:18:08.940610] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
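Passing -o 0, as in the accel_decomp_full case that just finished, appears to make accel_perf hand the whole 111250-byte bib file to the engine in one operation instead of the default 4096-byte blocks (compare val='111250 bytes' above with val='4096 bytes' in the earlier cases). A sketch of that invocation, with the same assumptions as before:

  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0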
00:06:38.338 [2024-07-13 15:18:08.970287] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:38.338 [2024-07-13 15:18:09.063883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.338 [2024-07-13 15:18:09.063918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.338 [2024-07-13 15:18:09.064030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.338 [2024-07-13 15:18:09.064033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # IFS=: 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.622 15:18:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.555 
15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.555 00:06:39.555 real 0m1.409s 00:06:39.555 user 0m4.703s 00:06:39.555 sys 0m0.146s 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.555 15:18:10 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:39.555 ************************************ 00:06:39.555 END TEST accel_decomp_mcore 00:06:39.555 ************************************ 00:06:39.813 15:18:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.813 
15:18:10 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:39.813 15:18:10 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:39.813 15:18:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.813 15:18:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.813 ************************************ 00:06:39.814 START TEST accel_decomp_full_mcore 00:06:39.814 ************************************ 00:06:39.814 15:18:10 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:39.814 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:39.814 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:39.814 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.814 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:39.814 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.814 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:39.814 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:39.814 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.814 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.814 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.814 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.814 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.814 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:39.814 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:39.814 [2024-07-13 15:18:10.360746] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:39.814 [2024-07-13 15:18:10.360811] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid985694 ] 00:06:39.814 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.814 [2024-07-13 15:18:10.393522] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
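The _mcore variants add -m 0xf, which reaches DPDK as the -c 0xf coremask in the EAL parameters above and shows up as "Total cores available: 4" with four reactor threads; that also lines up with the accel_decomp_mcore result above reporting roughly four seconds of user time (0m4.703s) against about 1.4 seconds of wall time. A comparable standalone run, under the same assumptions:

  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf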
00:06:39.814 [2024-07-13 15:18:10.423689] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.814 [2024-07-13 15:18:10.519928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.814 [2024-07-13 15:18:10.519987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.814 [2024-07-13 15:18:10.520053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.814 [2024-07-13 15:18:10.520056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:40.072 15:18:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.006 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.006 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.006 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.006 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.006 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.006 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.006 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.006 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.006 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.006 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.006 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.006 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.006 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore 
-- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.007 00:06:41.007 real 0m1.413s 00:06:41.007 user 0m4.728s 00:06:41.007 sys 0m0.149s 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.007 15:18:11 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:41.007 ************************************ 00:06:41.007 END TEST accel_decomp_full_mcore 00:06:41.007 ************************************ 00:06:41.266 15:18:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:41.266 15:18:11 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:41.266 15:18:11 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:41.266 15:18:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.266 15:18:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.266 ************************************ 00:06:41.266 START TEST accel_decomp_mthread 00:06:41.266 ************************************ 00:06:41.266 15:18:11 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:41.266 15:18:11 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:41.266 15:18:11 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:41.266 15:18:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.266 15:18:11 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:41.266 15:18:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.266 15:18:11 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:41.266 15:18:11 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:41.266 15:18:11 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.266 15:18:11 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.266 15:18:11 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.266 15:18:11 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.266 15:18:11 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.266 15:18:11 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:41.266 15:18:11 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:41.266 [2024-07-13 15:18:11.825992] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:41.266 [2024-07-13 15:18:11.826056] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid985855 ] 00:06:41.266 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.266 [2024-07-13 15:18:11.858478] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:41.266 [2024-07-13 15:18:11.890545] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.266 [2024-07-13 15:18:11.981107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.525 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.526 15:18:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 
00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.458 00:06:42.458 real 0m1.414s 00:06:42.458 user 0m1.269s 00:06:42.458 sys 0m0.148s 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.458 15:18:13 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:42.458 ************************************ 00:06:42.458 END TEST accel_decomp_mthread 00:06:42.458 ************************************ 00:06:42.736 15:18:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.736 15:18:13 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:42.736 15:18:13 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:42.736 15:18:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.736 15:18:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.736 ************************************ 00:06:42.736 START TEST accel_decomp_full_mthread 00:06:42.736 ************************************ 00:06:42.736 15:18:13 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:42.736 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:42.736 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:42.736 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.736 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:42.736 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.736 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:42.736 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:42.736 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.736 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.736 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.736 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.736 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.736 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:42.736 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:42.736 [2024-07-13 15:18:13.290349] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:42.736 [2024-07-13 15:18:13.290413] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid986017 ] 00:06:42.736 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.736 [2024-07-13 15:18:13.322418] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:42.736 [2024-07-13 15:18:13.355659] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.736 [2024-07-13 15:18:13.454446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.001 15:18:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.376 00:06:44.376 real 0m1.462s 00:06:44.376 user 0m1.307s 00:06:44.376 sys 0m0.157s 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.376 15:18:14 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:44.376 ************************************ 00:06:44.376 END TEST accel_decomp_full_mthread 00:06:44.376 ************************************ 00:06:44.376 15:18:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:44.376 15:18:14 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:44.376 15:18:14 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 
00:06:44.376 15:18:14 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:44.376 15:18:14 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:44.376 15:18:14 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.376 15:18:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.376 15:18:14 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.376 15:18:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.376 15:18:14 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.376 15:18:14 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.376 15:18:14 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.376 15:18:14 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:44.376 15:18:14 accel -- accel/accel.sh@41 -- # jq -r . 00:06:44.376 ************************************ 00:06:44.376 START TEST accel_dif_functional_tests 00:06:44.376 ************************************ 00:06:44.376 15:18:14 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:44.376 [2024-07-13 15:18:14.823720] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:44.376 [2024-07-13 15:18:14.823797] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid986289 ] 00:06:44.376 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.376 [2024-07-13 15:18:14.853675] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:44.376 [2024-07-13 15:18:14.885909] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.376 [2024-07-13 15:18:14.980336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.376 [2024-07-13 15:18:14.980389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.376 [2024-07-13 15:18:14.980392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.376 00:06:44.376 00:06:44.376 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.376 http://cunit.sourceforge.net/ 00:06:44.376 00:06:44.376 00:06:44.376 Suite: accel_dif 00:06:44.376 Test: verify: DIF generated, GUARD check ...passed 00:06:44.376 Test: verify: DIF generated, APPTAG check ...passed 00:06:44.376 Test: verify: DIF generated, REFTAG check ...passed 00:06:44.376 Test: verify: DIF not generated, GUARD check ...[2024-07-13 15:18:15.069110] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:44.376 passed 00:06:44.376 Test: verify: DIF not generated, APPTAG check ...[2024-07-13 15:18:15.069189] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:44.376 passed 00:06:44.376 Test: verify: DIF not generated, REFTAG check ...[2024-07-13 15:18:15.069223] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:44.376 passed 00:06:44.376 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:44.376 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-13 15:18:15.069283] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:44.376 passed 00:06:44.376 Test: verify: APPTAG incorrect, no APPTAG check ...passed 
00:06:44.376 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:44.376 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:44.376 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-13 15:18:15.069410] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:44.376 passed 00:06:44.376 Test: verify copy: DIF generated, GUARD check ...passed 00:06:44.376 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:44.376 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:44.376 Test: verify copy: DIF not generated, GUARD check ...[2024-07-13 15:18:15.069552] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:44.376 passed 00:06:44.376 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-13 15:18:15.069593] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:44.376 passed 00:06:44.376 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-13 15:18:15.069625] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:44.376 passed 00:06:44.376 Test: generate copy: DIF generated, GUARD check ...passed 00:06:44.376 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:44.376 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:44.376 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:44.376 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:44.376 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:44.376 Test: generate copy: iovecs-len validate ...[2024-07-13 15:18:15.069837] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:44.376 passed 00:06:44.376 Test: generate copy: buffer alignment validate ...passed 00:06:44.376 00:06:44.376 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.376 suites 1 1 n/a 0 0 00:06:44.376 tests 26 26 26 0 0 00:06:44.376 asserts 115 115 115 0 n/a 00:06:44.376 00:06:44.376 Elapsed time = 0.002 seconds 00:06:44.636 00:06:44.636 real 0m0.487s 00:06:44.636 user 0m0.731s 00:06:44.636 sys 0m0.174s 00:06:44.636 15:18:15 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.636 15:18:15 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:44.636 ************************************ 00:06:44.636 END TEST accel_dif_functional_tests 00:06:44.636 ************************************ 00:06:44.636 15:18:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:44.636 00:06:44.636 real 0m31.766s 00:06:44.636 user 0m35.091s 00:06:44.636 sys 0m4.593s 00:06:44.636 15:18:15 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.636 15:18:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.636 ************************************ 00:06:44.636 END TEST accel 00:06:44.636 ************************************ 00:06:44.636 15:18:15 -- common/autotest_common.sh@1142 -- # return 0 00:06:44.636 15:18:15 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:44.636 15:18:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.636 15:18:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.636 15:18:15 -- common/autotest_common.sh@10 -- # set +x 00:06:44.636 ************************************ 00:06:44.636 START TEST accel_rpc 00:06:44.636 ************************************ 00:06:44.636 15:18:15 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:44.636 * Looking for test storage... 00:06:44.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:44.636 15:18:15 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:44.636 15:18:15 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=986365 00:06:44.636 15:18:15 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:44.636 15:18:15 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 986365 00:06:44.636 15:18:15 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 986365 ']' 00:06:44.636 15:18:15 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.636 15:18:15 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.636 15:18:15 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.636 15:18:15 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.636 15:18:15 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.895 [2024-07-13 15:18:15.441661] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:06:44.895 [2024-07-13 15:18:15.441748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid986365 ] 00:06:44.895 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.895 [2024-07-13 15:18:15.472783] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:44.895 [2024-07-13 15:18:15.499829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.895 [2024-07-13 15:18:15.584301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.895 15:18:15 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.895 15:18:15 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:44.895 15:18:15 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:44.895 15:18:15 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:44.895 15:18:15 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:44.895 15:18:15 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:44.895 15:18:15 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:44.895 15:18:15 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.895 15:18:15 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.895 15:18:15 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.153 ************************************ 00:06:45.153 START TEST accel_assign_opcode 00:06:45.153 ************************************ 00:06:45.153 15:18:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:45.153 15:18:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:45.153 15:18:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.153 15:18:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:45.153 [2024-07-13 15:18:15.668958] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:45.153 15:18:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.153 15:18:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:45.153 15:18:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.153 15:18:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:45.153 [2024-07-13 15:18:15.676952] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:45.153 15:18:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.153 15:18:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:45.153 15:18:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.153 15:18:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:45.411 15:18:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.411 15:18:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:45.411 
15:18:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.411 15:18:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:45.411 15:18:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:45.411 15:18:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:45.411 15:18:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.411 software 00:06:45.411 00:06:45.411 real 0m0.295s 00:06:45.411 user 0m0.041s 00:06:45.411 sys 0m0.008s 00:06:45.411 15:18:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.411 15:18:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:45.411 ************************************ 00:06:45.411 END TEST accel_assign_opcode 00:06:45.411 ************************************ 00:06:45.411 15:18:15 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:45.411 15:18:15 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 986365 00:06:45.411 15:18:15 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 986365 ']' 00:06:45.411 15:18:15 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 986365 00:06:45.411 15:18:15 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:45.411 15:18:15 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.411 15:18:15 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 986365 00:06:45.411 15:18:16 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:45.411 15:18:16 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:45.411 15:18:16 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 986365' 00:06:45.411 killing process with pid 986365 00:06:45.411 15:18:16 accel_rpc -- common/autotest_common.sh@967 -- # kill 986365 00:06:45.411 15:18:16 accel_rpc -- common/autotest_common.sh@972 -- # wait 986365 00:06:45.669 00:06:45.669 real 0m1.088s 00:06:45.669 user 0m1.003s 00:06:45.669 sys 0m0.446s 00:06:45.669 15:18:16 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.669 15:18:16 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.669 ************************************ 00:06:45.669 END TEST accel_rpc 00:06:45.669 ************************************ 00:06:45.927 15:18:16 -- common/autotest_common.sh@1142 -- # return 0 00:06:45.927 15:18:16 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:45.927 15:18:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.927 15:18:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.927 15:18:16 -- common/autotest_common.sh@10 -- # set +x 00:06:45.927 ************************************ 00:06:45.927 START TEST app_cmdline 00:06:45.927 ************************************ 00:06:45.927 15:18:16 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:45.927 * Looking for test storage... 
00:06:45.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:45.927 15:18:16 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:45.927 15:18:16 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=986569 00:06:45.927 15:18:16 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:45.927 15:18:16 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 986569 00:06:45.927 15:18:16 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 986569 ']' 00:06:45.927 15:18:16 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.927 15:18:16 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.927 15:18:16 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.927 15:18:16 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.927 15:18:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.927 [2024-07-13 15:18:16.582628] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:06:45.927 [2024-07-13 15:18:16.582715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid986569 ] 00:06:45.927 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.927 [2024-07-13 15:18:16.613582] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:45.927 [2024-07-13 15:18:16.640293] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.186 [2024-07-13 15:18:16.726706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.443 15:18:16 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.443 15:18:16 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:46.443 15:18:16 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:46.443 { 00:06:46.443 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:06:46.443 "fields": { 00:06:46.443 "major": 24, 00:06:46.443 "minor": 9, 00:06:46.443 "patch": 0, 00:06:46.443 "suffix": "-pre", 00:06:46.443 "commit": "719d03c6a" 00:06:46.443 } 00:06:46.443 } 00:06:46.701 15:18:17 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:46.701 15:18:17 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:46.701 15:18:17 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:46.701 15:18:17 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:46.701 15:18:17 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:46.701 15:18:17 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.701 15:18:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:46.701 15:18:17 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:46.701 15:18:17 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:46.701 15:18:17 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.701 15:18:17 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:46.701 15:18:17 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:46.701 15:18:17 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.702 15:18:17 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:46.702 15:18:17 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.702 15:18:17 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.702 15:18:17 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.702 15:18:17 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.702 15:18:17 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.702 15:18:17 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.702 15:18:17 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.702 15:18:17 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.702 15:18:17 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:46.702 15:18:17 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.960 request: 00:06:46.960 { 00:06:46.960 "method": 
"env_dpdk_get_mem_stats", 00:06:46.960 "req_id": 1 00:06:46.960 } 00:06:46.960 Got JSON-RPC error response 00:06:46.960 response: 00:06:46.960 { 00:06:46.960 "code": -32601, 00:06:46.960 "message": "Method not found" 00:06:46.960 } 00:06:46.960 15:18:17 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:46.960 15:18:17 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:46.960 15:18:17 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:46.960 15:18:17 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:46.960 15:18:17 app_cmdline -- app/cmdline.sh@1 -- # killprocess 986569 00:06:46.960 15:18:17 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 986569 ']' 00:06:46.960 15:18:17 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 986569 00:06:46.960 15:18:17 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:46.960 15:18:17 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:46.960 15:18:17 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 986569 00:06:46.960 15:18:17 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:46.960 15:18:17 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:46.960 15:18:17 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 986569' 00:06:46.960 killing process with pid 986569 00:06:46.960 15:18:17 app_cmdline -- common/autotest_common.sh@967 -- # kill 986569 00:06:46.960 15:18:17 app_cmdline -- common/autotest_common.sh@972 -- # wait 986569 00:06:47.218 00:06:47.218 real 0m1.490s 00:06:47.218 user 0m1.837s 00:06:47.218 sys 0m0.461s 00:06:47.218 15:18:17 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.218 15:18:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:47.218 ************************************ 00:06:47.218 END TEST app_cmdline 00:06:47.218 ************************************ 00:06:47.477 15:18:17 -- common/autotest_common.sh@1142 -- # return 0 00:06:47.477 15:18:17 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:47.477 15:18:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.477 15:18:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.477 15:18:17 -- common/autotest_common.sh@10 -- # set +x 00:06:47.477 ************************************ 00:06:47.477 START TEST version 00:06:47.477 ************************************ 00:06:47.477 15:18:18 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:47.477 * Looking for test storage... 
00:06:47.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:47.477 15:18:18 version -- app/version.sh@17 -- # get_header_version major 00:06:47.477 15:18:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:47.477 15:18:18 version -- app/version.sh@14 -- # cut -f2 00:06:47.477 15:18:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.477 15:18:18 version -- app/version.sh@17 -- # major=24 00:06:47.477 15:18:18 version -- app/version.sh@18 -- # get_header_version minor 00:06:47.477 15:18:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:47.477 15:18:18 version -- app/version.sh@14 -- # cut -f2 00:06:47.477 15:18:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.477 15:18:18 version -- app/version.sh@18 -- # minor=9 00:06:47.477 15:18:18 version -- app/version.sh@19 -- # get_header_version patch 00:06:47.477 15:18:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:47.477 15:18:18 version -- app/version.sh@14 -- # cut -f2 00:06:47.477 15:18:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.477 15:18:18 version -- app/version.sh@19 -- # patch=0 00:06:47.477 15:18:18 version -- app/version.sh@20 -- # get_header_version suffix 00:06:47.477 15:18:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:47.477 15:18:18 version -- app/version.sh@14 -- # cut -f2 00:06:47.477 15:18:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.477 15:18:18 version -- app/version.sh@20 -- # suffix=-pre 00:06:47.477 15:18:18 version -- app/version.sh@22 -- # version=24.9 00:06:47.477 15:18:18 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:47.477 15:18:18 version -- app/version.sh@28 -- # version=24.9rc0 00:06:47.477 15:18:18 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:47.477 15:18:18 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:47.477 15:18:18 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:47.477 15:18:18 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:47.477 00:06:47.477 real 0m0.103s 00:06:47.477 user 0m0.051s 00:06:47.477 sys 0m0.074s 00:06:47.477 15:18:18 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.477 15:18:18 version -- common/autotest_common.sh@10 -- # set +x 00:06:47.477 ************************************ 00:06:47.477 END TEST version 00:06:47.477 ************************************ 00:06:47.477 15:18:18 -- common/autotest_common.sh@1142 -- # return 0 00:06:47.477 15:18:18 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:47.477 15:18:18 -- spdk/autotest.sh@198 -- # uname -s 00:06:47.477 15:18:18 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:47.477 15:18:18 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:47.477 15:18:18 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 
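For reference, the version test traced above builds its version string directly from include/spdk/version.h using the grep/cut/tr pipeline visible in the trace. A minimal standalone sketch of that extraction follows; the SPDK_ROOT variable is introduced here only as shorthand for the repository path used by this job, and the helper simply mirrors the get_header_version function seen in the trace, simplified to take the field name in upper case:

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    get_header_version() {
        # pull the value column out of '#define SPDK_VERSION_<FIELD>	<value>' and strip quotes
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$SPDK_ROOT/include/spdk/version.h" | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)    # 24
    minor=$(get_header_version MINOR)    # 9
    patch=$(get_header_version PATCH)    # 0
    suffix=$(get_header_version SUFFIX)  # -pre
    version="${major}.${minor}"
    (( patch != 0 )) && version="${version}.${patch}"
    version="${version}${suffix/-pre/rc0}"
    echo "$version"                      # 24.9rc0

The resulting 24.9rc0 string is what the [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] comparison above checks against the version reported by python3 -c 'import spdk; print(spdk.__version__)'.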
00:06:47.477 15:18:18 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:47.477 15:18:18 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:47.477 15:18:18 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:47.477 15:18:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:47.477 15:18:18 -- common/autotest_common.sh@10 -- # set +x 00:06:47.477 15:18:18 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:47.477 15:18:18 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:47.477 15:18:18 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:47.477 15:18:18 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:47.477 15:18:18 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:47.477 15:18:18 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:47.477 15:18:18 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:47.477 15:18:18 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:47.477 15:18:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.477 15:18:18 -- common/autotest_common.sh@10 -- # set +x 00:06:47.477 ************************************ 00:06:47.477 START TEST nvmf_tcp 00:06:47.477 ************************************ 00:06:47.477 15:18:18 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:47.477 * Looking for test storage... 00:06:47.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:47.736 15:18:18 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.736 15:18:18 
nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.736 15:18:18 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.736 15:18:18 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.736 15:18:18 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.736 15:18:18 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.736 15:18:18 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:47.736 15:18:18 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:47.736 15:18:18 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:47.736 15:18:18 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:47.736 15:18:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.737 15:18:18 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:47.737 15:18:18 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:47.737 15:18:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:47.737 15:18:18 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.737 15:18:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.737 ************************************ 00:06:47.737 START TEST nvmf_example 00:06:47.737 ************************************ 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:47.737 * Looking for test storage... 00:06:47.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:47.737 15:18:18 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:47.737 15:18:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:49.638 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:49.638 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:49.638 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:49.638 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:49.638 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:49.638 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:49.638 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:49.638 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:49.638 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:49.638 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:49.638 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:49.638 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:49.638 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:49.638 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:49.639 15:18:20 
nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:49.639 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:49.639 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:49.639 
Found net devices under 0000:0a:00.0: cvl_0_0 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:49.639 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:49.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:49.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:06:49.639 00:06:49.639 --- 10.0.0.2 ping statistics --- 00:06:49.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.639 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:49.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:49.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:06:49.639 00:06:49.639 --- 10.0.0.1 ping statistics --- 00:06:49.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.639 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:49.639 15:18:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:49.897 15:18:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:49.897 15:18:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:49.897 15:18:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:49.897 15:18:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:49.897 15:18:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:49.897 15:18:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:49.897 15:18:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=988584 00:06:49.897 15:18:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:49.897 15:18:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:49.897 15:18:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 988584 00:06:49.897 15:18:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 988584 ']' 00:06:49.897 15:18:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.897 15:18:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.897 15:18:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:49.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.897 15:18:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.897 15:18:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:49.897 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:50.828 15:18:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:50.828 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.019 Initializing NVMe Controllers 00:07:03.019 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:03.019 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:03.019 Initialization complete. Launching workers. 00:07:03.019 ======================================================== 00:07:03.019 Latency(us) 00:07:03.019 Device Information : IOPS MiB/s Average min max 00:07:03.020 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14904.88 58.22 4293.24 874.66 22124.67 00:07:03.020 ======================================================== 00:07:03.020 Total : 14904.88 58.22 4293.24 874.66 22124.67 00:07:03.020 00:07:03.020 15:18:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:03.020 15:18:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:03.020 15:18:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:03.020 15:18:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:03.020 15:18:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:03.020 15:18:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:03.020 15:18:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:03.020 15:18:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:03.020 rmmod nvme_tcp 00:07:03.020 rmmod nvme_fabrics 00:07:03.020 rmmod nvme_keyring 00:07:03.020 15:18:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:03.020 15:18:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:03.020 15:18:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:03.020 15:18:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 988584 ']' 00:07:03.020 15:18:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 988584 00:07:03.020 15:18:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 988584 ']' 00:07:03.020 15:18:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 988584 00:07:03.020 15:18:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:03.020 15:18:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:03.020 15:18:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 988584 00:07:03.020 15:18:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:03.020 15:18:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:03.020 15:18:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 988584' 00:07:03.020 killing process with pid 988584 00:07:03.020 15:18:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 988584 00:07:03.020 15:18:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 988584 00:07:03.020 nvmf threads initialize successfully 00:07:03.020 bdev subsystem init successfully 00:07:03.020 created a nvmf target service 00:07:03.020 create targets's poll groups done 00:07:03.020 all subsystems of target started 00:07:03.020 nvmf target is running 00:07:03.020 all subsystems of target stopped 00:07:03.020 destroy targets's poll groups done 00:07:03.020 destroyed the nvmf target service 00:07:03.020 bdev subsystem 
finish successfully 00:07:03.020 nvmf threads destroy successfully 00:07:03.020 15:18:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:03.020 15:18:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:03.020 15:18:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:03.020 15:18:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:03.020 15:18:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:03.020 15:18:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.020 15:18:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:03.020 15:18:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.589 15:18:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:03.589 15:18:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:03.589 15:18:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:03.589 15:18:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:03.589 00:07:03.589 real 0m15.882s 00:07:03.589 user 0m45.359s 00:07:03.589 sys 0m3.282s 00:07:03.589 15:18:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.589 15:18:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:03.589 ************************************ 00:07:03.589 END TEST nvmf_example 00:07:03.589 ************************************ 00:07:03.589 15:18:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:03.589 15:18:34 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:03.589 15:18:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:03.589 15:18:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.589 15:18:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:03.589 ************************************ 00:07:03.589 START TEST nvmf_filesystem 00:07:03.589 ************************************ 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:03.589 * Looking for test storage... 
00:07:03.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:03.589 15:18:34 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:03.589 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:03.590 #define SPDK_CONFIG_H 00:07:03.590 #define SPDK_CONFIG_APPS 1 00:07:03.590 #define SPDK_CONFIG_ARCH native 00:07:03.590 #undef SPDK_CONFIG_ASAN 00:07:03.590 #undef SPDK_CONFIG_AVAHI 00:07:03.590 #undef SPDK_CONFIG_CET 00:07:03.590 #define SPDK_CONFIG_COVERAGE 1 00:07:03.590 #define SPDK_CONFIG_CROSS_PREFIX 00:07:03.590 #undef SPDK_CONFIG_CRYPTO 00:07:03.590 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:03.590 #undef SPDK_CONFIG_CUSTOMOCF 00:07:03.590 #undef SPDK_CONFIG_DAOS 00:07:03.590 #define SPDK_CONFIG_DAOS_DIR 00:07:03.590 #define SPDK_CONFIG_DEBUG 1 00:07:03.590 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:03.590 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:03.590 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:03.590 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:03.590 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:03.590 #undef SPDK_CONFIG_DPDK_UADK 00:07:03.590 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:03.590 #define SPDK_CONFIG_EXAMPLES 1 00:07:03.590 #undef SPDK_CONFIG_FC 00:07:03.590 #define SPDK_CONFIG_FC_PATH 00:07:03.590 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:03.590 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:03.590 #undef SPDK_CONFIG_FUSE 00:07:03.590 #undef SPDK_CONFIG_FUZZER 00:07:03.590 #define SPDK_CONFIG_FUZZER_LIB 00:07:03.590 #undef SPDK_CONFIG_GOLANG 00:07:03.590 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:03.590 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:03.590 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:03.590 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:03.590 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:03.590 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:03.590 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:03.590 #define SPDK_CONFIG_IDXD 1 00:07:03.590 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:03.590 #undef SPDK_CONFIG_IPSEC_MB 00:07:03.590 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:03.590 #define SPDK_CONFIG_ISAL 1 00:07:03.590 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:03.590 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:03.590 #define 
SPDK_CONFIG_LIBDIR 00:07:03.590 #undef SPDK_CONFIG_LTO 00:07:03.590 #define SPDK_CONFIG_MAX_LCORES 128 00:07:03.590 #define SPDK_CONFIG_NVME_CUSE 1 00:07:03.590 #undef SPDK_CONFIG_OCF 00:07:03.590 #define SPDK_CONFIG_OCF_PATH 00:07:03.590 #define SPDK_CONFIG_OPENSSL_PATH 00:07:03.590 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:03.590 #define SPDK_CONFIG_PGO_DIR 00:07:03.590 #undef SPDK_CONFIG_PGO_USE 00:07:03.590 #define SPDK_CONFIG_PREFIX /usr/local 00:07:03.590 #undef SPDK_CONFIG_RAID5F 00:07:03.590 #undef SPDK_CONFIG_RBD 00:07:03.590 #define SPDK_CONFIG_RDMA 1 00:07:03.590 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:03.590 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:03.590 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:03.590 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:03.590 #define SPDK_CONFIG_SHARED 1 00:07:03.590 #undef SPDK_CONFIG_SMA 00:07:03.590 #define SPDK_CONFIG_TESTS 1 00:07:03.590 #undef SPDK_CONFIG_TSAN 00:07:03.590 #define SPDK_CONFIG_UBLK 1 00:07:03.590 #define SPDK_CONFIG_UBSAN 1 00:07:03.590 #undef SPDK_CONFIG_UNIT_TESTS 00:07:03.590 #undef SPDK_CONFIG_URING 00:07:03.590 #define SPDK_CONFIG_URING_PATH 00:07:03.590 #undef SPDK_CONFIG_URING_ZNS 00:07:03.590 #undef SPDK_CONFIG_USDT 00:07:03.590 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:03.590 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:03.590 #define SPDK_CONFIG_VFIO_USER 1 00:07:03.590 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:03.590 #define SPDK_CONFIG_VHOST 1 00:07:03.590 #define SPDK_CONFIG_VIRTIO 1 00:07:03.590 #undef SPDK_CONFIG_VTUNE 00:07:03.590 #define SPDK_CONFIG_VTUNE_DIR 00:07:03.590 #define SPDK_CONFIG_WERROR 1 00:07:03.590 #define SPDK_CONFIG_WPDK_DIR 00:07:03.590 #undef SPDK_CONFIG_XNVME 00:07:03.590 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 
00:07:03.590 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:03.591 
15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export 
SPDK_TEST_LVOL 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : main 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:03.591 
15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:03.591 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 
00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 990307 ]] 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 990307 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.J20ZMz 00:07:03.592 
15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.J20ZMz/tests/target /tmp/spdk.J20ZMz 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=53957038080 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994708992 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8037670912 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941716480 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997352448 00:07:03.592 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 
-- # uses["$mount"]=55635968 00:07:03.593 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:03.593 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:03.593 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:03.593 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390178816 00:07:03.593 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398944256 00:07:03.593 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8765440 00:07:03.593 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:03.851 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:03.851 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:03.851 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996373504 00:07:03.851 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997356544 00:07:03.851 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=983040 00:07:03.851 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:03.851 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:03.851 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:03.851 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:07:03.851 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:07:03.851 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:03.851 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:03.851 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:03.851 * Looking for test storage... 
00:07:03.851 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:03.851 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:03.851 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.851 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:03.851 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:03.851 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=53957038080 00:07:03.851 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:03.851 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=10252263424 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:03.852 15:18:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:05.790 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:05.790 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:05.790 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:05.790 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:05.790 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:05.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:05.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:07:05.791 00:07:05.791 --- 10.0.0.2 ping statistics --- 00:07:05.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.791 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:05.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:05.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:07:05.791 00:07:05.791 --- 10.0.0.1 ping statistics --- 00:07:05.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.791 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.791 ************************************ 00:07:05.791 START TEST nvmf_filesystem_no_in_capsule 00:07:05.791 ************************************ 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=991939 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 991939 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 
991939 ']' 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.791 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:05.791 [2024-07-13 15:18:36.523959] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:05.791 [2024-07-13 15:18:36.524056] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.050 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.050 [2024-07-13 15:18:36.564990] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:06.050 [2024-07-13 15:18:36.592246] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.050 [2024-07-13 15:18:36.683524] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:06.050 [2024-07-13 15:18:36.683585] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:06.050 [2024-07-13 15:18:36.683612] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.050 [2024-07-13 15:18:36.683627] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.050 [2024-07-13 15:18:36.683638] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
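Note: the nvmf_tcp_init/nvmfappstart sequence traced above boils down to splitting the two interfaces that nvmf/common.sh discovered (cvl_0_0 and cvl_0_1 in this run) between a target network namespace and the host side, then launching nvmf_tgt inside that namespace. A condensed sketch of the equivalent shell, assuming the same interface names, addresses, core mask and binary location as this job, is:

  ip netns add cvl_0_0_ns_spdk                                          # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator/host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                    # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &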
00:07:06.050 [2024-07-13 15:18:36.683720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.050 [2024-07-13 15:18:36.683787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.050 [2024-07-13 15:18:36.683837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.050 [2024-07-13 15:18:36.683840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.050 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.050 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:06.050 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:06.050 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:06.050 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.309 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.309 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:06.309 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:06.309 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.309 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.309 [2024-07-13 15:18:36.836779] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.309 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.309 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:06.309 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.309 15:18:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.309 Malloc1 00:07:06.309 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.309 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:06.309 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.309 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.309 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.309 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:06.310 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.310 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:06.310 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.310 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:06.310 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.310 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.310 [2024-07-13 15:18:37.025791] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:06.310 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.310 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:06.310 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:06.310 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:06.310 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:06.310 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:06.310 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:06.310 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.310 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.310 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.310 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:06.310 { 00:07:06.310 "name": "Malloc1", 00:07:06.310 "aliases": [ 00:07:06.310 "3940c6c6-e46f-48eb-9dc9-9f23c6c8961f" 00:07:06.310 ], 00:07:06.310 "product_name": "Malloc disk", 00:07:06.310 "block_size": 512, 00:07:06.310 "num_blocks": 1048576, 00:07:06.310 "uuid": "3940c6c6-e46f-48eb-9dc9-9f23c6c8961f", 00:07:06.310 "assigned_rate_limits": { 00:07:06.310 "rw_ios_per_sec": 0, 00:07:06.310 "rw_mbytes_per_sec": 0, 00:07:06.310 "r_mbytes_per_sec": 0, 00:07:06.310 "w_mbytes_per_sec": 0 00:07:06.310 }, 00:07:06.310 "claimed": true, 00:07:06.310 "claim_type": "exclusive_write", 00:07:06.310 "zoned": false, 00:07:06.310 "supported_io_types": { 00:07:06.310 "read": true, 00:07:06.310 "write": true, 00:07:06.310 "unmap": true, 00:07:06.310 "flush": true, 00:07:06.310 "reset": true, 00:07:06.310 "nvme_admin": false, 00:07:06.310 "nvme_io": false, 00:07:06.310 "nvme_io_md": false, 00:07:06.310 "write_zeroes": true, 00:07:06.310 "zcopy": true, 00:07:06.310 "get_zone_info": false, 00:07:06.310 "zone_management": false, 00:07:06.310 "zone_append": false, 00:07:06.310 "compare": false, 00:07:06.310 "compare_and_write": false, 00:07:06.310 "abort": true, 00:07:06.310 "seek_hole": false, 00:07:06.310 "seek_data": false, 00:07:06.310 "copy": true, 00:07:06.310 "nvme_iov_md": false 00:07:06.310 }, 00:07:06.310 "memory_domains": [ 00:07:06.310 { 
00:07:06.310 "dma_device_id": "system", 00:07:06.310 "dma_device_type": 1 00:07:06.310 }, 00:07:06.310 { 00:07:06.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.310 "dma_device_type": 2 00:07:06.310 } 00:07:06.310 ], 00:07:06.310 "driver_specific": {} 00:07:06.310 } 00:07:06.310 ]' 00:07:06.310 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:06.569 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:06.569 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:06.569 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:06.569 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:06.569 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:06.569 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:06.569 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:07.135 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:07.135 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:07.135 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:07.135 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:07.135 15:18:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:09.660 15:18:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:09.660 15:18:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:09.660 15:18:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:09.660 15:18:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:09.660 15:18:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:09.660 15:18:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:09.660 15:18:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:09.660 15:18:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:09.660 15:18:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:09.660 15:18:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:09.660 15:18:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:09.660 15:18:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:09.660 15:18:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:09.660 15:18:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:09.660 15:18:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:09.660 15:18:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:09.660 15:18:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:09.660 15:18:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:09.917 15:18:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:11.288 15:18:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:11.288 15:18:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:11.288 15:18:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:11.288 15:18:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.288 15:18:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.288 ************************************ 00:07:11.288 START TEST filesystem_ext4 00:07:11.288 ************************************ 00:07:11.288 15:18:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:11.288 15:18:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:11.288 15:18:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:11.288 15:18:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:11.288 15:18:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:11.288 15:18:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:11.288 15:18:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:11.288 15:18:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:11.288 15:18:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:11.288 15:18:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:11.288 15:18:41 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:11.288 mke2fs 1.46.5 (30-Dec-2021) 00:07:11.288 Discarding device blocks: 0/522240 done 00:07:11.288 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:11.288 Filesystem UUID: 2cbab1e8-bd0b-4013-86a9-c0209d847be0 00:07:11.288 Superblock backups stored on blocks: 00:07:11.288 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:11.288 00:07:11.288 Allocating group tables: 0/64 done 00:07:11.288 Writing inode tables: 0/64 done 00:07:14.564 Creating journal (8192 blocks): done 00:07:14.822 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:07:14.823 00:07:14.823 15:18:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:14.823 15:18:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:15.756 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:15.756 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:15.756 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:15.756 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:15.756 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:15.756 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:15.756 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 991939 00:07:15.756 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:15.756 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:15.756 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:15.757 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:15.757 00:07:15.757 real 0m4.791s 00:07:15.757 user 0m0.015s 00:07:15.757 sys 0m0.056s 00:07:15.757 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.757 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:15.757 ************************************ 00:07:15.757 END TEST filesystem_ext4 00:07:15.757 ************************************ 00:07:15.757 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:15.757 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:15.757 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:15.757 15:18:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.757 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.015 ************************************ 00:07:16.015 START TEST filesystem_btrfs 00:07:16.015 ************************************ 00:07:16.015 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:16.015 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:16.015 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:16.015 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:16.015 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:16.015 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:16.015 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:16.015 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:16.015 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:16.015 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:16.015 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:16.015 btrfs-progs v6.6.2 00:07:16.015 See https://btrfs.readthedocs.io for more information. 00:07:16.015 00:07:16.015 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:16.015 NOTE: several default settings have changed in version 5.15, please make sure 00:07:16.015 this does not affect your deployments: 00:07:16.015 - DUP for metadata (-m dup) 00:07:16.015 - enabled no-holes (-O no-holes) 00:07:16.015 - enabled free-space-tree (-R free-space-tree) 00:07:16.015 00:07:16.015 Label: (null) 00:07:16.015 UUID: e1afd500-7353-4bb1-9788-e5fe2ba6e1cc 00:07:16.015 Node size: 16384 00:07:16.015 Sector size: 4096 00:07:16.015 Filesystem size: 510.00MiB 00:07:16.015 Block group profiles: 00:07:16.015 Data: single 8.00MiB 00:07:16.015 Metadata: DUP 32.00MiB 00:07:16.015 System: DUP 8.00MiB 00:07:16.015 SSD detected: yes 00:07:16.015 Zoned device: no 00:07:16.015 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:16.015 Runtime features: free-space-tree 00:07:16.015 Checksum: crc32c 00:07:16.015 Number of devices: 1 00:07:16.015 Devices: 00:07:16.015 ID SIZE PATH 00:07:16.015 1 510.00MiB /dev/nvme0n1p1 00:07:16.015 00:07:16.015 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:16.015 15:18:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:16.580 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:16.581 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:16.581 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:16.581 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:16.581 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:16.581 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:16.581 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 991939 00:07:16.581 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:16.581 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:16.581 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:16.581 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:16.581 00:07:16.581 real 0m0.772s 00:07:16.581 user 0m0.018s 00:07:16.581 sys 0m0.125s 00:07:16.581 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.581 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:16.581 ************************************ 00:07:16.581 END TEST filesystem_btrfs 00:07:16.581 ************************************ 00:07:16.581 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:16.581 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:16.581 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:16.581 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.581 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.839 ************************************ 00:07:16.839 START TEST filesystem_xfs 00:07:16.839 ************************************ 00:07:16.840 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:16.840 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:16.840 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:16.840 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:16.840 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:16.840 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:16.840 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:16.840 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:16.840 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:16.840 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:16.840 15:18:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:16.840 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:16.840 = sectsz=512 attr=2, projid32bit=1 00:07:16.840 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:16.840 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:16.840 data = bsize=4096 blocks=130560, imaxpct=25 00:07:16.840 = sunit=0 swidth=0 blks 00:07:16.840 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:16.840 log =internal log bsize=4096 blocks=16384, version=2 00:07:16.840 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:16.840 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:17.774 Discarding blocks...Done. 
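Note: before these mkfs runs, the trace above provisions the target and attaches the host to it. rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client (scripts/rpc.py in a default checkout), so a stand-alone equivalent of the provisioning and attach steps, with the subsystem name, serial, 512 MiB malloc size and addresses taken from this run, looks roughly like:

  # target side, issued against the nvmf_tgt started in the namespace
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0                         # -c 0: no in-capsule data
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1                                # 512 MiB, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # host side: attach the remote namespace and carve one GPT partition for the filesystem tests
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420               # the job also passes --hostnqn/--hostid
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe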
00:07:17.774 15:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:17.774 15:18:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:19.669 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:19.669 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:19.669 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:19.669 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:19.669 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:19.669 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:19.669 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 991939 00:07:19.670 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:19.670 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:19.670 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:19.670 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:19.670 00:07:19.670 real 0m2.915s 00:07:19.670 user 0m0.010s 00:07:19.670 sys 0m0.066s 00:07:19.670 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.670 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:19.670 ************************************ 00:07:19.670 END TEST filesystem_xfs 00:07:19.670 ************************************ 00:07:19.670 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:19.670 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:19.927 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:19.927 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:19.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:19.927 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:19.927 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:19.927 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:19.927 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:19.927 15:18:50 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:19.927 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:19.927 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:19.927 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:19.927 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.927 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.927 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.927 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:19.927 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 991939 00:07:19.927 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 991939 ']' 00:07:19.927 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 991939 00:07:19.927 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:19.927 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:19.927 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 991939 00:07:20.234 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:20.234 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:20.234 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 991939' 00:07:20.234 killing process with pid 991939 00:07:20.234 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 991939 00:07:20.234 15:18:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 991939 00:07:20.517 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:20.517 00:07:20.517 real 0m14.641s 00:07:20.517 user 0m56.437s 00:07:20.517 sys 0m1.999s 00:07:20.517 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.517 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.517 ************************************ 00:07:20.517 END TEST nvmf_filesystem_no_in_capsule 00:07:20.517 ************************************ 00:07:20.517 15:18:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:20.517 15:18:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:20.517 15:18:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:07:20.517 15:18:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.517 15:18:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.517 ************************************ 00:07:20.517 START TEST nvmf_filesystem_in_capsule 00:07:20.517 ************************************ 00:07:20.517 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:20.517 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:20.517 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:20.517 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:20.517 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:20.517 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.517 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=993905 00:07:20.517 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:20.517 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 993905 00:07:20.517 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 993905 ']' 00:07:20.517 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.517 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.517 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.517 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.517 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.517 [2024-07-13 15:18:51.211445] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:20.517 [2024-07-13 15:18:51.211523] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.517 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.517 [2024-07-13 15:18:51.250206] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:20.517 [2024-07-13 15:18:51.277074] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:20.775 [2024-07-13 15:18:51.367471] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:20.775 [2024-07-13 15:18:51.367529] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:20.775 [2024-07-13 15:18:51.367542] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:20.775 [2024-07-13 15:18:51.367553] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:20.775 [2024-07-13 15:18:51.367562] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:20.775 [2024-07-13 15:18:51.367645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.775 [2024-07-13 15:18:51.367707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.775 [2024-07-13 15:18:51.367774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.775 [2024-07-13 15:18:51.367776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.775 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.775 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:20.775 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:20.775 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:20.775 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.775 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:20.775 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:20.775 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:20.775 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.775 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.775 [2024-07-13 15:18:51.523758] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.776 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.776 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:20.776 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.776 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.032 Malloc1 00:07:21.032 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.033 15:18:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.033 [2024-07-13 15:18:51.720124] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:21.033 { 00:07:21.033 "name": "Malloc1", 00:07:21.033 "aliases": [ 00:07:21.033 "dc47a58e-d55b-474d-b724-eb9ce22514ce" 00:07:21.033 ], 00:07:21.033 "product_name": "Malloc disk", 00:07:21.033 "block_size": 512, 00:07:21.033 "num_blocks": 1048576, 00:07:21.033 "uuid": "dc47a58e-d55b-474d-b724-eb9ce22514ce", 00:07:21.033 "assigned_rate_limits": { 00:07:21.033 "rw_ios_per_sec": 0, 00:07:21.033 "rw_mbytes_per_sec": 0, 00:07:21.033 "r_mbytes_per_sec": 0, 00:07:21.033 "w_mbytes_per_sec": 0 00:07:21.033 }, 00:07:21.033 "claimed": true, 00:07:21.033 "claim_type": "exclusive_write", 00:07:21.033 "zoned": false, 00:07:21.033 "supported_io_types": { 00:07:21.033 "read": true, 00:07:21.033 "write": true, 00:07:21.033 "unmap": true, 00:07:21.033 "flush": true, 00:07:21.033 "reset": true, 00:07:21.033 "nvme_admin": false, 00:07:21.033 "nvme_io": false, 00:07:21.033 "nvme_io_md": false, 00:07:21.033 "write_zeroes": true, 
00:07:21.033 "zcopy": true, 00:07:21.033 "get_zone_info": false, 00:07:21.033 "zone_management": false, 00:07:21.033 "zone_append": false, 00:07:21.033 "compare": false, 00:07:21.033 "compare_and_write": false, 00:07:21.033 "abort": true, 00:07:21.033 "seek_hole": false, 00:07:21.033 "seek_data": false, 00:07:21.033 "copy": true, 00:07:21.033 "nvme_iov_md": false 00:07:21.033 }, 00:07:21.033 "memory_domains": [ 00:07:21.033 { 00:07:21.033 "dma_device_id": "system", 00:07:21.033 "dma_device_type": 1 00:07:21.033 }, 00:07:21.033 { 00:07:21.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.033 "dma_device_type": 2 00:07:21.033 } 00:07:21.033 ], 00:07:21.033 "driver_specific": {} 00:07:21.033 } 00:07:21.033 ]' 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:21.033 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:21.290 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:21.290 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:21.290 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:21.290 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:21.290 15:18:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:21.855 15:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:21.855 15:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:21.855 15:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:21.855 15:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:21.855 15:18:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:23.750 15:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:23.750 15:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:23.750 15:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:23.750 15:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:23.750 15:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:23.750 15:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:23.751 15:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:23.751 15:18:54 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:23.751 15:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:23.751 15:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:23.751 15:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:23.751 15:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:23.751 15:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:23.751 15:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:23.751 15:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:23.751 15:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:23.751 15:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:24.008 15:18:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:24.940 15:18:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:25.873 15:18:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:25.873 15:18:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:25.873 15:18:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:25.873 15:18:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.873 15:18:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.873 ************************************ 00:07:25.873 START TEST filesystem_in_capsule_ext4 00:07:25.873 ************************************ 00:07:25.873 15:18:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:25.873 15:18:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:25.873 15:18:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:25.873 15:18:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:25.873 15:18:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:25.873 15:18:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:25.873 15:18:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:25.873 15:18:56 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:25.873 15:18:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:25.873 15:18:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:25.873 15:18:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:25.873 mke2fs 1.46.5 (30-Dec-2021) 00:07:25.873 Discarding device blocks: 0/522240 done 00:07:25.873 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:25.873 Filesystem UUID: 845bf3b6-5b02-4bcf-a911-41f48e61ed19 00:07:25.873 Superblock backups stored on blocks: 00:07:25.873 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:25.873 00:07:25.873 Allocating group tables: 0/64 done 00:07:25.873 Writing inode tables: 0/64 done 00:07:26.131 Creating journal (8192 blocks): done 00:07:26.954 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:07:26.954 00:07:26.954 15:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:26.954 15:18:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 993905 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:27.889 00:07:27.889 real 0m1.877s 00:07:27.889 user 0m0.021s 00:07:27.889 sys 0m0.056s 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 
00:07:27.889 ************************************ 00:07:27.889 END TEST filesystem_in_capsule_ext4 00:07:27.889 ************************************ 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.889 ************************************ 00:07:27.889 START TEST filesystem_in_capsule_btrfs 00:07:27.889 ************************************ 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:27.889 btrfs-progs v6.6.2 00:07:27.889 See https://btrfs.readthedocs.io for more information. 00:07:27.889 00:07:27.889 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:27.889 NOTE: several default settings have changed in version 5.15, please make sure 00:07:27.889 this does not affect your deployments: 00:07:27.889 - DUP for metadata (-m dup) 00:07:27.889 - enabled no-holes (-O no-holes) 00:07:27.889 - enabled free-space-tree (-R free-space-tree) 00:07:27.889 00:07:27.889 Label: (null) 00:07:27.889 UUID: c2b235ae-155c-47ea-bf8d-486a44c5e027 00:07:27.889 Node size: 16384 00:07:27.889 Sector size: 4096 00:07:27.889 Filesystem size: 510.00MiB 00:07:27.889 Block group profiles: 00:07:27.889 Data: single 8.00MiB 00:07:27.889 Metadata: DUP 32.00MiB 00:07:27.889 System: DUP 8.00MiB 00:07:27.889 SSD detected: yes 00:07:27.889 Zoned device: no 00:07:27.889 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:27.889 Runtime features: free-space-tree 00:07:27.889 Checksum: crc32c 00:07:27.889 Number of devices: 1 00:07:27.889 Devices: 00:07:27.889 ID SIZE PATH 00:07:27.889 1 510.00MiB /dev/nvme0n1p1 00:07:27.889 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:27.889 15:18:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:28.455 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:28.455 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:28.455 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:28.455 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:28.455 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 993905 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:28.456 00:07:28.456 real 0m0.641s 00:07:28.456 user 0m0.022s 00:07:28.456 sys 0m0.114s 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:28.456 ************************************ 00:07:28.456 END TEST filesystem_in_capsule_btrfs 00:07:28.456 ************************************ 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.456 ************************************ 00:07:28.456 START TEST filesystem_in_capsule_xfs 00:07:28.456 ************************************ 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:28.456 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:28.714 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:28.714 = sectsz=512 attr=2, projid32bit=1 00:07:28.714 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:28.714 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:28.714 data = bsize=4096 blocks=130560, imaxpct=25 00:07:28.714 = sunit=0 swidth=0 blks 00:07:28.714 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:28.714 log =internal log bsize=4096 blocks=16384, version=2 00:07:28.714 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:28.714 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:29.279 Discarding blocks...Done. 
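(Annotation: all three filesystem passes go through the same make_filesystem helper in autotest_common.sh; the traced line numbers @924-@943 show its shape. The sketch below is reconstructed from those traced lines only — the retry logic implied by `local i=0` is not visible in this log and is omitted, so this is an assumption about structure, not the actual helper.)

  # sketch reconstructed from the traced autotest_common.sh@924-943
  make_filesystem() {
      local fstype=$1              # @924: ext4, btrfs or xfs in this run
      local dev_name=$2            # @925: /dev/nvme0n1p1 here
      local i=0                    # @926: retry counter (retry loop not shown in this trace)
      local force                  # @927
      if [ "$fstype" = ext4 ]; then
          force=-F                 # @930: mkfs.ext4 spells force as -F
      else
          force=-f                 # @932: btrfs and xfs use -f
      fi
      mkfs.$fstype $force "$dev_name" && return 0   # @935 / @943
  }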
00:07:29.279 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:29.279 15:18:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:31.180 15:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:31.180 15:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:31.180 15:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:31.180 15:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:31.180 15:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:31.180 15:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:31.180 15:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 993905 00:07:31.180 15:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:31.180 15:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:31.180 15:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:31.180 15:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:31.180 00:07:31.180 real 0m2.650s 00:07:31.180 user 0m0.018s 00:07:31.180 sys 0m0.056s 00:07:31.180 15:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.180 15:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:31.180 ************************************ 00:07:31.180 END TEST filesystem_in_capsule_xfs 00:07:31.180 ************************************ 00:07:31.180 15:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:31.180 15:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:31.180 15:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:31.180 15:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:31.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:31.439 15:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:31.439 15:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:31.439 15:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:31.439 15:19:01 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.439 15:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:31.439 15:19:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.439 15:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:31.439 15:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:31.439 15:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.439 15:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.439 15:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.439 15:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:31.439 15:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 993905 00:07:31.439 15:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 993905 ']' 00:07:31.439 15:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 993905 00:07:31.439 15:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:31.439 15:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:31.439 15:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 993905 00:07:31.439 15:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:31.439 15:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:31.439 15:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 993905' 00:07:31.439 killing process with pid 993905 00:07:31.439 15:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 993905 00:07:31.439 15:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 993905 00:07:32.006 15:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:32.006 00:07:32.006 real 0m11.322s 00:07:32.006 user 0m43.448s 00:07:32.006 sys 0m1.704s 00:07:32.006 15:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.006 15:19:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.006 ************************************ 00:07:32.006 END TEST nvmf_filesystem_in_capsule 00:07:32.006 ************************************ 00:07:32.006 15:19:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:32.006 15:19:02 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:32.006 15:19:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:07:32.006 15:19:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:32.006 15:19:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:32.006 15:19:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:32.006 15:19:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:32.006 15:19:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:32.006 rmmod nvme_tcp 00:07:32.006 rmmod nvme_fabrics 00:07:32.006 rmmod nvme_keyring 00:07:32.006 15:19:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:32.006 15:19:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:32.006 15:19:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:32.006 15:19:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:32.006 15:19:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:32.006 15:19:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:32.006 15:19:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:32.006 15:19:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:32.006 15:19:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:32.006 15:19:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.006 15:19:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:32.006 15:19:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.909 15:19:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:33.909 00:07:33.909 real 0m30.398s 00:07:33.909 user 1m40.706s 00:07:33.909 sys 0m5.307s 00:07:33.909 15:19:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.909 15:19:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.909 ************************************ 00:07:33.909 END TEST nvmf_filesystem 00:07:33.909 ************************************ 00:07:33.909 15:19:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:33.909 15:19:04 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:33.909 15:19:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:33.909 15:19:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.909 15:19:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:33.909 ************************************ 00:07:33.909 START TEST nvmf_target_discovery 00:07:33.909 ************************************ 00:07:33.909 15:19:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:34.167 * Looking for test storage... 
00:07:34.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:34.167 15:19:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:36.068 15:19:06 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:36.068 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:36.068 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:36.068 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:36.068 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:36.068 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:36.069 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:36.069 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:36.069 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:36.069 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:36.069 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:36.069 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:36.069 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:36.069 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:36.069 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:36.069 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:36.069 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:36.069 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:36.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:36.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:07:36.069 00:07:36.069 --- 10.0.0.2 ping statistics --- 00:07:36.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.069 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:07:36.069 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:36.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:36.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:07:36.069 00:07:36.069 --- 10.0.0.1 ping statistics --- 00:07:36.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.069 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:07:36.069 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:36.069 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:36.069 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:36.069 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:36.069 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:36.069 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:36.069 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:36.069 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:36.069 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:36.327 15:19:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:36.327 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:36.327 15:19:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:36.327 15:19:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.327 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=997372 00:07:36.327 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:36.327 15:19:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 997372 00:07:36.327 15:19:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 997372 ']' 00:07:36.327 15:19:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.327 15:19:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:36.327 15:19:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:36.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.327 15:19:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:36.327 15:19:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.327 [2024-07-13 15:19:06.884359] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:36.327 [2024-07-13 15:19:06.884438] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.327 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.327 [2024-07-13 15:19:06.923405] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:36.327 [2024-07-13 15:19:06.949926] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.327 [2024-07-13 15:19:07.037961] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.327 [2024-07-13 15:19:07.038020] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.327 [2024-07-13 15:19:07.038034] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.327 [2024-07-13 15:19:07.038045] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.327 [2024-07-13 15:19:07.038054] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:36.327 [2024-07-13 15:19:07.038107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.327 [2024-07-13 15:19:07.038165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.327 [2024-07-13 15:19:07.038231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.327 [2024-07-13 15:19:07.038233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.586 [2024-07-13 15:19:07.200707] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:36.586 15:19:07 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.586 Null1 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.586 [2024-07-13 15:19:07.241016] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.586 Null2 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.586 Null3 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.586 Null4 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.586 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.844 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.844 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:07:36.844 00:07:36.844 Discovery Log Number of Records 6, Generation counter 6 00:07:36.844 =====Discovery Log Entry 0====== 00:07:36.844 trtype: tcp 00:07:36.844 adrfam: ipv4 00:07:36.844 subtype: current discovery subsystem 00:07:36.844 treq: not required 00:07:36.844 portid: 0 00:07:36.844 trsvcid: 4420 00:07:36.844 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:36.844 traddr: 10.0.0.2 00:07:36.844 eflags: explicit discovery connections, duplicate discovery information 00:07:36.844 sectype: none 00:07:36.844 =====Discovery Log Entry 1====== 00:07:36.844 trtype: tcp 00:07:36.844 adrfam: ipv4 00:07:36.844 subtype: nvme subsystem 00:07:36.844 treq: not required 00:07:36.844 portid: 0 00:07:36.844 trsvcid: 4420 00:07:36.845 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:36.845 traddr: 10.0.0.2 00:07:36.845 eflags: none 00:07:36.845 sectype: none 00:07:36.845 =====Discovery Log Entry 2====== 00:07:36.845 trtype: tcp 00:07:36.845 adrfam: ipv4 00:07:36.845 subtype: nvme subsystem 00:07:36.845 treq: not required 00:07:36.845 portid: 0 00:07:36.845 trsvcid: 4420 00:07:36.845 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:36.845 traddr: 10.0.0.2 00:07:36.845 eflags: none 00:07:36.845 sectype: none 00:07:36.845 =====Discovery Log Entry 3====== 00:07:36.845 trtype: tcp 00:07:36.845 adrfam: ipv4 00:07:36.845 subtype: nvme subsystem 00:07:36.845 treq: not required 00:07:36.845 portid: 0 00:07:36.845 trsvcid: 4420 00:07:36.845 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:36.845 traddr: 10.0.0.2 
00:07:36.845 eflags: none 00:07:36.845 sectype: none 00:07:36.845 =====Discovery Log Entry 4====== 00:07:36.845 trtype: tcp 00:07:36.845 adrfam: ipv4 00:07:36.845 subtype: nvme subsystem 00:07:36.845 treq: not required 00:07:36.845 portid: 0 00:07:36.845 trsvcid: 4420 00:07:36.845 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:36.845 traddr: 10.0.0.2 00:07:36.845 eflags: none 00:07:36.845 sectype: none 00:07:36.845 =====Discovery Log Entry 5====== 00:07:36.845 trtype: tcp 00:07:36.845 adrfam: ipv4 00:07:36.845 subtype: discovery subsystem referral 00:07:36.845 treq: not required 00:07:36.845 portid: 0 00:07:36.845 trsvcid: 4430 00:07:36.845 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:36.845 traddr: 10.0.0.2 00:07:36.845 eflags: none 00:07:36.845 sectype: none 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:36.845 Perform nvmf subsystem discovery via RPC 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.845 [ 00:07:36.845 { 00:07:36.845 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:36.845 "subtype": "Discovery", 00:07:36.845 "listen_addresses": [ 00:07:36.845 { 00:07:36.845 "trtype": "TCP", 00:07:36.845 "adrfam": "IPv4", 00:07:36.845 "traddr": "10.0.0.2", 00:07:36.845 "trsvcid": "4420" 00:07:36.845 } 00:07:36.845 ], 00:07:36.845 "allow_any_host": true, 00:07:36.845 "hosts": [] 00:07:36.845 }, 00:07:36.845 { 00:07:36.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:36.845 "subtype": "NVMe", 00:07:36.845 "listen_addresses": [ 00:07:36.845 { 00:07:36.845 "trtype": "TCP", 00:07:36.845 "adrfam": "IPv4", 00:07:36.845 "traddr": "10.0.0.2", 00:07:36.845 "trsvcid": "4420" 00:07:36.845 } 00:07:36.845 ], 00:07:36.845 "allow_any_host": true, 00:07:36.845 "hosts": [], 00:07:36.845 "serial_number": "SPDK00000000000001", 00:07:36.845 "model_number": "SPDK bdev Controller", 00:07:36.845 "max_namespaces": 32, 00:07:36.845 "min_cntlid": 1, 00:07:36.845 "max_cntlid": 65519, 00:07:36.845 "namespaces": [ 00:07:36.845 { 00:07:36.845 "nsid": 1, 00:07:36.845 "bdev_name": "Null1", 00:07:36.845 "name": "Null1", 00:07:36.845 "nguid": "8595A22DE92C4BE5B0BE72BE5F105D61", 00:07:36.845 "uuid": "8595a22d-e92c-4be5-b0be-72be5f105d61" 00:07:36.845 } 00:07:36.845 ] 00:07:36.845 }, 00:07:36.845 { 00:07:36.845 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:36.845 "subtype": "NVMe", 00:07:36.845 "listen_addresses": [ 00:07:36.845 { 00:07:36.845 "trtype": "TCP", 00:07:36.845 "adrfam": "IPv4", 00:07:36.845 "traddr": "10.0.0.2", 00:07:36.845 "trsvcid": "4420" 00:07:36.845 } 00:07:36.845 ], 00:07:36.845 "allow_any_host": true, 00:07:36.845 "hosts": [], 00:07:36.845 "serial_number": "SPDK00000000000002", 00:07:36.845 "model_number": "SPDK bdev Controller", 00:07:36.845 "max_namespaces": 32, 00:07:36.845 "min_cntlid": 1, 00:07:36.845 "max_cntlid": 65519, 00:07:36.845 "namespaces": [ 00:07:36.845 { 00:07:36.845 "nsid": 1, 00:07:36.845 "bdev_name": "Null2", 00:07:36.845 "name": "Null2", 00:07:36.845 "nguid": "D9A40B2FEEFF4ECAB981746DE3438BC7", 00:07:36.845 "uuid": "d9a40b2f-eeff-4eca-b981-746de3438bc7" 00:07:36.845 } 00:07:36.845 ] 00:07:36.845 }, 00:07:36.845 { 00:07:36.845 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:36.845 "subtype": "NVMe", 00:07:36.845 "listen_addresses": [ 
00:07:36.845 { 00:07:36.845 "trtype": "TCP", 00:07:36.845 "adrfam": "IPv4", 00:07:36.845 "traddr": "10.0.0.2", 00:07:36.845 "trsvcid": "4420" 00:07:36.845 } 00:07:36.845 ], 00:07:36.845 "allow_any_host": true, 00:07:36.845 "hosts": [], 00:07:36.845 "serial_number": "SPDK00000000000003", 00:07:36.845 "model_number": "SPDK bdev Controller", 00:07:36.845 "max_namespaces": 32, 00:07:36.845 "min_cntlid": 1, 00:07:36.845 "max_cntlid": 65519, 00:07:36.845 "namespaces": [ 00:07:36.845 { 00:07:36.845 "nsid": 1, 00:07:36.845 "bdev_name": "Null3", 00:07:36.845 "name": "Null3", 00:07:36.845 "nguid": "81411882B36D417098783BD16EE074B7", 00:07:36.845 "uuid": "81411882-b36d-4170-9878-3bd16ee074b7" 00:07:36.845 } 00:07:36.845 ] 00:07:36.845 }, 00:07:36.845 { 00:07:36.845 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:36.845 "subtype": "NVMe", 00:07:36.845 "listen_addresses": [ 00:07:36.845 { 00:07:36.845 "trtype": "TCP", 00:07:36.845 "adrfam": "IPv4", 00:07:36.845 "traddr": "10.0.0.2", 00:07:36.845 "trsvcid": "4420" 00:07:36.845 } 00:07:36.845 ], 00:07:36.845 "allow_any_host": true, 00:07:36.845 "hosts": [], 00:07:36.845 "serial_number": "SPDK00000000000004", 00:07:36.845 "model_number": "SPDK bdev Controller", 00:07:36.845 "max_namespaces": 32, 00:07:36.845 "min_cntlid": 1, 00:07:36.845 "max_cntlid": 65519, 00:07:36.845 "namespaces": [ 00:07:36.845 { 00:07:36.845 "nsid": 1, 00:07:36.845 "bdev_name": "Null4", 00:07:36.845 "name": "Null4", 00:07:36.845 "nguid": "8C5322AD1A49496CA28F881F61BE29AC", 00:07:36.845 "uuid": "8c5322ad-1a49-496c-a28f-881f61be29ac" 00:07:36.845 } 00:07:36.845 ] 00:07:36.845 } 00:07:36.845 ] 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.845 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:36.846 
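The trace above tears the discovery test's target configuration back down: for each of the four test subsystems it issues nvmf_delete_subsystem followed by bdev_null_delete for the backing null bdev, then removes the port-4430 discovery referral. A minimal sketch of that cleanup, assuming the standard scripts/rpc.py client rather than the harness's rpc_cmd wrapper:

```bash
#!/usr/bin/env bash
# Sketch of the cleanup loop traced above; the rpc.py path is an assumption.
set -euo pipefail

rpc=./scripts/rpc.py   # assumed location of the SPDK RPC client

for i in $(seq 1 4); do
    # Remove the subsystem first, then its backing null bdev (Null1..Null4).
    "$rpc" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    "$rpc" bdev_null_delete "Null${i}"
done

# The discovery referral registered on port 4430 is removed separately.
"$rpc" nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
```

After this, bdev_get_bdevs returning an empty name list (as it does in the trace that follows) confirms nothing was left behind.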
15:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:36.846 rmmod nvme_tcp 00:07:36.846 rmmod nvme_fabrics 00:07:36.846 rmmod nvme_keyring 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 997372 ']' 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 997372 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 997372 ']' 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 997372 00:07:36.846 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:37.104 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:37.104 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 997372 00:07:37.104 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:37.104 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:37.104 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 997372' 00:07:37.104 killing process with pid 997372 00:07:37.104 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 997372 00:07:37.104 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 997372 00:07:37.362 15:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:37.362 15:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:37.362 15:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:37.362 15:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:37.362 15:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:37.362 15:19:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.362 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.362 15:19:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.266 15:19:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:39.266 00:07:39.266 real 0m5.248s 00:07:39.266 user 0m4.006s 00:07:39.266 sys 0m1.815s 00:07:39.266 15:19:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.266 15:19:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 
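The nvmftestfini sequence here unloads the initiator-side kernel modules, kills the target process (pid 997372 in this run), removes the test namespace, and flushes the initiator address. A rough sketch of the same teardown, with the namespace removal assumed to mirror what _remove_spdk_ns does:

```bash
#!/usr/bin/env bash
# Sketch of the nvmftestfini teardown traced above (assumptions noted inline).
set -euo pipefail

nvmfpid=997372   # placeholder; the harness records the real nvmf_tgt PID

sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Stop the target and wait for the reactor process to exit.
kill "$nvmfpid" 2>/dev/null || true
while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.5; done

# Assumed equivalent of _remove_spdk_ns: drop the target namespace, then
# flush the initiator-side address configured on cvl_0_1.
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1
```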
00:07:39.266 ************************************ 00:07:39.266 END TEST nvmf_target_discovery 00:07:39.266 ************************************ 00:07:39.266 15:19:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:39.266 15:19:09 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:39.266 15:19:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:39.266 15:19:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.266 15:19:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:39.266 ************************************ 00:07:39.266 START TEST nvmf_referrals 00:07:39.266 ************************************ 00:07:39.266 15:19:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:39.266 * Looking for test storage... 00:07:39.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:39.266 15:19:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.266 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:39.266 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.266 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.266 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.266 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.266 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.266 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.266 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.266 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.266 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.266 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.266 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:39.266 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:39.266 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # 
NVMF_REFERRAL_IP_3=127.0.0.4 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:39.267 15:19:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
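referrals.sh defines three referral targets (127.0.0.2 through 127.0.0.4) on port 4430, then brings the target up via nvmftestinit before exercising them. A minimal sketch of the referral setup the script performs, assuming scripts/rpc.py as the RPC client:

```bash
#!/usr/bin/env bash
# Sketch of the referral setup exercised by referrals.sh (rpc.py path assumed).
set -euo pipefail

rpc=./scripts/rpc.py

# Create the TCP transport and a discovery listener on the target address.
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
       -t tcp -a 10.0.0.2 -s 8009

# Register one referral per configured address on the referral port 4430.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    "$rpc" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done

# The test then compares this list with what `nvme discover` reports.
"$rpc" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
```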
00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:41.167 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:41.167 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.167 
15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:41.167 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:41.167 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.167 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.459 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.459 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.459 15:19:11 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:41.459 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.459 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.459 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.459 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:41.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:07:41.459 00:07:41.459 --- 10.0.0.2 ping statistics --- 00:07:41.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.459 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:07:41.459 15:19:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:41.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:07:41.459 00:07:41.459 --- 10.0.0.1 ping statistics --- 00:07:41.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.459 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:07:41.459 15:19:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.459 15:19:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:41.459 15:19:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:41.459 15:19:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.459 15:19:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:41.459 15:19:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:41.459 15:19:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.459 15:19:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:41.459 15:19:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:41.459 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:41.459 15:19:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:41.459 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:41.459 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.459 15:19:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=999345 00:07:41.459 15:19:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:41.459 15:19:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 999345 00:07:41.459 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 999345 ']' 00:07:41.459 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.459 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:41.459 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:41.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.459 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:41.459 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.459 [2024-07-13 15:19:12.083262] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:41.459 [2024-07-13 15:19:12.083344] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.459 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.459 [2024-07-13 15:19:12.121707] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:41.459 [2024-07-13 15:19:12.154026] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:41.718 [2024-07-13 15:19:12.247072] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.718 [2024-07-13 15:19:12.247116] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.718 [2024-07-13 15:19:12.247138] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.718 [2024-07-13 15:19:12.247183] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.718 [2024-07-13 15:19:12.247201] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:41.718 [2024-07-13 15:19:12.247314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.718 [2024-07-13 15:19:12.247380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.718 [2024-07-13 15:19:12.247426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.718 [2024-07-13 15:19:12.247433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.718 [2024-07-13 15:19:12.386534] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.718 
15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.718 [2024-07-13 15:19:12.398724] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.718 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.976 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:41.976 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:41.976 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:41.976 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:41.976 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:41.976 15:19:12 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:41.976 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:41.976 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:41.976 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:41.976 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:41.976 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:41.976 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.976 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.976 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.976 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:41.976 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.976 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.976 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.976 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:41.976 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.976 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.976 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:42.242 15:19:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:42.499 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:42.499 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:42.499 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:42.499 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:42.499 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:42.499 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 
8009 -o json 00:07:42.499 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:42.499 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:42.499 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:42.499 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:42.499 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:42.499 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:42.499 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:42.757 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:42.757 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:42.757 15:19:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.757 15:19:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:42.757 15:19:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.757 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:42.757 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:42.757 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:42.757 15:19:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.757 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:42.757 15:19:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:42.757 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:42.757 15:19:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.757 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:42.757 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:42.757 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:42.757 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:42.757 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:42.757 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:42.757 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:42.757 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:43.014 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- 
# echo 127.0.0.2 00:07:43.014 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:43.014 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:43.014 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:43.014 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:43.014 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:43.014 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:43.014 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:43.014 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:43.014 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:43.014 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:43.014 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:43.014 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:43.272 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:43.272 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:43.272 15:19:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.272 15:19:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:43.272 15:19:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.272 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:43.272 15:19:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.272 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:43.272 15:19:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:43.272 15:19:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.272 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:43.272 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:43.272 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:43.272 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:43.272 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:43.272 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:07:43.272 15:19:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:43.272 15:19:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:43.272 15:19:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:43.272 15:19:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:43.272 15:19:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:43.272 15:19:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:43.272 15:19:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:43.272 15:19:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:43.272 15:19:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:43.272 15:19:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:43.272 15:19:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:43.272 rmmod nvme_tcp 00:07:43.529 rmmod nvme_fabrics 00:07:43.529 rmmod nvme_keyring 00:07:43.529 15:19:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:43.529 15:19:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:43.529 15:19:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:43.529 15:19:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 999345 ']' 00:07:43.529 15:19:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 999345 00:07:43.529 15:19:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 999345 ']' 00:07:43.529 15:19:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 999345 00:07:43.529 15:19:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:43.529 15:19:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:43.529 15:19:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 999345 00:07:43.529 15:19:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:43.529 15:19:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:43.529 15:19:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 999345' 00:07:43.529 killing process with pid 999345 00:07:43.529 15:19:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 999345 00:07:43.529 15:19:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 999345 00:07:43.788 15:19:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:43.788 15:19:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:43.788 15:19:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:43.788 15:19:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:43.788 15:19:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:43.788 15:19:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.788 15:19:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.788 15:19:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.689 15:19:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr 
flush cvl_0_1 00:07:45.689 00:07:45.689 real 0m6.426s 00:07:45.689 user 0m9.625s 00:07:45.689 sys 0m2.034s 00:07:45.689 15:19:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.689 15:19:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.689 ************************************ 00:07:45.689 END TEST nvmf_referrals 00:07:45.689 ************************************ 00:07:45.689 15:19:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:45.689 15:19:16 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:45.689 15:19:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:45.689 15:19:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.690 15:19:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:45.690 ************************************ 00:07:45.690 START TEST nvmf_connect_disconnect 00:07:45.690 ************************************ 00:07:45.690 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:45.947 * Looking for test storage... 00:07:45.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.947 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.947 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:45.947 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.948 15:19:16 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
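connect_disconnect.sh assembles the same NVMF_APP command line as the previous tests (-i shm-id -e 0xFFFF) and, through nvmftestinit, will run it inside the cvl_0_0_ns_spdk namespace. A sketch of how that launch looks, matching the invocation seen earlier in this log; the relative binary path and core mask are assumptions about the usual test defaults:

```bash
#!/usr/bin/env bash
# Sketch of launching the SPDK target inside the test network namespace.
set -euo pipefail

NVMF_APP_SHM_ID=0
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_APP=(./build/bin/nvmf_tgt -i "$NVMF_APP_SHM_ID" -e 0xFFFF)

# Run the target inside the namespace with the test's usual 0xF core mask.
ip netns exec "$NVMF_TARGET_NAMESPACE" "${NVMF_APP[@]}" -m 0xF &
nvmfpid=$!
echo "nvmf_tgt started in $NVMF_TARGET_NAMESPACE with pid $nvmfpid"
```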
00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:45.948 15:19:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:47.850 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:47.850 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == 
rdma ]] 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:47.850 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:47.850 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:47.851 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:47.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:47.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:07:47.851 00:07:47.851 --- 10.0.0.2 ping statistics --- 00:07:47.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.851 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:47.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:47.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:07:47.851 00:07:47.851 --- 10.0.0.1 ping statistics --- 00:07:47.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.851 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1001636 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1001636 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1001636 ']' 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:47.851 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:48.109 [2024-07-13 15:19:18.661104] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:07:48.109 [2024-07-13 15:19:18.661206] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.109 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.109 [2024-07-13 15:19:18.700534] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:48.109 [2024-07-13 15:19:18.726950] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.109 [2024-07-13 15:19:18.816361] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.109 [2024-07-13 15:19:18.816436] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.109 [2024-07-13 15:19:18.816456] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.109 [2024-07-13 15:19:18.816472] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.109 [2024-07-13 15:19:18.816486] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:48.109 [2024-07-13 15:19:18.816575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.109 [2024-07-13 15:19:18.816641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.109 [2024-07-13 15:19:18.816707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.109 [2024-07-13 15:19:18.816714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.369 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:48.369 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:48.369 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:48.369 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:48.369 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:48.369 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.369 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:48.369 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.369 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:48.369 [2024-07-13 15:19:18.973675] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.369 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.369 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:48.369 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.369 15:19:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:48.369 15:19:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.369 15:19:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:48.369 15:19:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:48.369 15:19:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.369 15:19:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:48.369 15:19:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.369 15:19:19 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:48.369 15:19:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.369 15:19:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:48.369 15:19:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.369 15:19:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.369 15:19:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.369 15:19:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:48.369 [2024-07-13 15:19:19.034924] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.369 15:19:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.369 15:19:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:48.369 15:19:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:48.369 15:19:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:48.369 15:19:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:50.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:53.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:55.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:57.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:00.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:04.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:07.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:14.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:16.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.247 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.657 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:08:56.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.271 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.609 15:23:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:40.609 15:23:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:40.609 15:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:40.609 15:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:11:40.609 15:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:40.609 15:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:11:40.609 15:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:40.609 15:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:40.609 rmmod nvme_tcp 00:11:40.609 rmmod nvme_fabrics 00:11:40.609 rmmod nvme_keyring 00:11:40.609 15:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:40.609 15:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:11:40.609 15:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:11:40.609 15:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1001636 ']' 00:11:40.609 15:23:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1001636 00:11:40.609 15:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1001636 ']' 00:11:40.609 15:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1001636 00:11:40.609 15:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:11:40.609 15:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = 
Linux ']' 00:11:40.609 15:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1001636 00:11:40.609 15:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:40.609 15:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:40.609 15:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1001636' 00:11:40.609 killing process with pid 1001636 00:11:40.609 15:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1001636 00:11:40.610 15:23:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1001636 00:11:40.610 15:23:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:40.610 15:23:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:40.610 15:23:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:40.610 15:23:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:40.610 15:23:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:40.610 15:23:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.610 15:23:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:40.610 15:23:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.514 15:23:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:42.514 00:11:42.514 real 3m56.801s 00:11:42.514 user 15m0.226s 00:11:42.514 sys 0m36.564s 00:11:42.514 15:23:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:42.514 15:23:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:42.514 ************************************ 00:11:42.514 END TEST nvmf_connect_disconnect 00:11:42.514 ************************************ 00:11:42.514 15:23:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:42.514 15:23:13 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:42.514 15:23:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:42.514 15:23:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:42.514 15:23:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:42.774 ************************************ 00:11:42.774 START TEST nvmf_multitarget 00:11:42.774 ************************************ 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:42.774 * Looking for test storage... 
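Functionally, the nvmf_connect_disconnect run that finishes above reduces to: create a TCP transport and a 64 MiB / 512 B malloc namespace, expose it on 10.0.0.2:4420, then connect and disconnect from the initiator 100 times. A simplified sketch; the rpc_cmd arguments are the ones visible in the trace, while the exact body of the loop (including the -t/-n/-a/-s flags given to nvme connect) is an assumption based on standard nvme-cli usage:

# Target side, driven over /var/tmp/spdk.sock inside the cvl_0_0_ns_spdk namespace.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc_cmd bdev_malloc_create 64 512                                        # -> Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: 100 iterations (the trace sets NVME_CONNECT='nvme connect -i 8').
for i in $(seq 1 100); do
    nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints the repeated "disconnected 1 controller(s)" lines
done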
00:11:42.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:11:42.774 15:23:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:44.711 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:44.711 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:44.711 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
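The device discovery traced around this point works purely from sysfs: for each supported Intel/Mellanox PCI ID, the script lists the interfaces registered under that PCI device and records their names. A reduced sketch of that lookup, using the same expansions as the trace (population of pci_devs from the PCI-ID tables and the link-state check are omitted here):

# Resolve each NIC's PCI address (e.g. 0000:0a:00.0, 0x8086:0x159b) to its kernel net device name.
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. /sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep just the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done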
00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:44.711 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:44.711 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:44.968 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:44.968 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:44.968 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:44.968 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:44.968 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:44.968 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:44.968 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:44.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:44.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:11:44.968 00:11:44.968 --- 10.0.0.2 ping statistics --- 00:11:44.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.968 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:11:44.968 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:44.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:44.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:11:44.968 00:11:44.968 --- 10.0.0.1 ping statistics --- 00:11:44.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.968 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:11:44.968 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.968 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:11:44.968 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:44.968 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.968 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:44.968 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:44.968 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.968 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:44.968 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:44.968 15:23:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:44.968 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:44.968 15:23:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:44.968 15:23:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:44.969 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1032823 00:11:44.969 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:44.969 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1032823 00:11:44.969 15:23:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1032823 ']' 00:11:44.969 15:23:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.969 15:23:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:44.969 15:23:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.969 15:23:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:44.969 15:23:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:44.969 [2024-07-13 15:23:15.629543] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
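As in the connect/disconnect run earlier, nvmftestinit turns the two ports of one physical E810 NIC into a private target/initiator pair: the target port is moved into a network namespace, both sides get a 10.0.0.x/24 address, TCP port 4420 is opened, connectivity is verified with ping, and nvmf_tgt is then started inside that namespace. Condensed from the commands in the trace (binary path abbreviated):

# Target port (cvl_0_0) lives in its own namespace; initiator port (cvl_0_1) stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                     # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0       # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT            # let NVMe/TCP traffic in
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # sanity-check both directions

# Launch the SPDK target inside the namespace and wait for its RPC socket.
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
waitforlisten "$nvmfpid"   # autotest_common.sh helper, as used in the trace

The namespace is what lets a single machine act as both NVMe-oF target and initiator over real NIC ports instead of loopback.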
00:11:44.969 [2024-07-13 15:23:15.629629] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.969 EAL: No free 2048 kB hugepages reported on node 1 00:11:44.969 [2024-07-13 15:23:15.671806] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:44.969 [2024-07-13 15:23:15.702806] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:45.225 [2024-07-13 15:23:15.796250] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:45.225 [2024-07-13 15:23:15.796303] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:45.225 [2024-07-13 15:23:15.796338] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:45.225 [2024-07-13 15:23:15.796360] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:45.225 [2024-07-13 15:23:15.796379] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:45.225 [2024-07-13 15:23:15.796454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.225 [2024-07-13 15:23:15.796510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:45.225 [2024-07-13 15:23:15.796644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:45.225 [2024-07-13 15:23:15.796652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.225 15:23:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:45.225 15:23:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:11:45.225 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:45.225 15:23:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:45.225 15:23:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:45.225 15:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:45.225 15:23:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:45.225 15:23:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:45.225 15:23:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:45.481 15:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:45.481 15:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:45.481 "nvmf_tgt_1" 00:11:45.481 15:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:45.738 "nvmf_tgt_2" 00:11:45.738 15:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 
nvmf_get_targets 00:11:45.738 15:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:45.738 15:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:45.738 15:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:45.996 true 00:11:45.996 15:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:45.996 true 00:11:45.996 15:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:45.996 15:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:46.254 rmmod nvme_tcp 00:11:46.254 rmmod nvme_fabrics 00:11:46.254 rmmod nvme_keyring 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1032823 ']' 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1032823 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1032823 ']' 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1032823 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1032823 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1032823' 00:11:46.254 killing process with pid 1032823 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1032823 00:11:46.254 15:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1032823 00:11:46.511 15:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:46.511 15:23:17 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:46.511 15:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:46.511 15:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:46.511 15:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:46.511 15:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.511 15:23:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.511 15:23:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.412 15:23:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:48.412 00:11:48.412 real 0m5.838s 00:11:48.412 user 0m6.590s 00:11:48.412 sys 0m2.004s 00:11:48.412 15:23:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:48.412 15:23:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:48.412 ************************************ 00:11:48.412 END TEST nvmf_multitarget 00:11:48.412 ************************************ 00:11:48.412 15:23:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:48.412 15:23:19 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:48.412 15:23:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:48.412 15:23:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.412 15:23:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:48.412 ************************************ 00:11:48.412 START TEST nvmf_rpc 00:11:48.412 ************************************ 00:11:48.412 15:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:48.671 * Looking for test storage... 
00:11:48.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:11:48.671 15:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:50.573 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:50.574 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:50.574 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:50.574 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:50.574 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:50.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:50.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:11:50.574 00:11:50.574 --- 10.0.0.2 ping statistics --- 00:11:50.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.574 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:50.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:50.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:11:50.574 00:11:50.574 --- 10.0.0.1 ping statistics --- 00:11:50.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.574 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:50.574 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:50.834 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:50.834 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:50.834 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:50.834 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.834 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1034920 00:11:50.834 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:50.834 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1034920 00:11:50.834 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1034920 ']' 00:11:50.834 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.834 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:50.834 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.834 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:50.834 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.834 [2024-07-13 15:23:21.399022] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:11:50.834 [2024-07-13 15:23:21.399095] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.834 EAL: No free 2048 kB hugepages reported on node 1 00:11:50.834 [2024-07-13 15:23:21.439819] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:50.834 [2024-07-13 15:23:21.472256] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:50.834 [2024-07-13 15:23:21.572318] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:50.834 [2024-07-13 15:23:21.572390] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:50.834 [2024-07-13 15:23:21.572415] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.834 [2024-07-13 15:23:21.572434] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.834 [2024-07-13 15:23:21.572453] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.834 [2024-07-13 15:23:21.572525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.834 [2024-07-13 15:23:21.572561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.834 [2024-07-13 15:23:21.572620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.834 [2024-07-13 15:23:21.572627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:51.093 "tick_rate": 2700000000, 00:11:51.093 "poll_groups": [ 00:11:51.093 { 00:11:51.093 "name": "nvmf_tgt_poll_group_000", 00:11:51.093 "admin_qpairs": 0, 00:11:51.093 "io_qpairs": 0, 00:11:51.093 "current_admin_qpairs": 0, 00:11:51.093 "current_io_qpairs": 0, 00:11:51.093 "pending_bdev_io": 0, 00:11:51.093 "completed_nvme_io": 0, 00:11:51.093 "transports": [] 00:11:51.093 }, 00:11:51.093 { 00:11:51.093 "name": "nvmf_tgt_poll_group_001", 00:11:51.093 "admin_qpairs": 0, 00:11:51.093 "io_qpairs": 0, 00:11:51.093 "current_admin_qpairs": 0, 00:11:51.093 "current_io_qpairs": 0, 00:11:51.093 "pending_bdev_io": 0, 00:11:51.093 "completed_nvme_io": 0, 00:11:51.093 "transports": [] 00:11:51.093 }, 00:11:51.093 { 00:11:51.093 "name": "nvmf_tgt_poll_group_002", 00:11:51.093 "admin_qpairs": 0, 00:11:51.093 "io_qpairs": 0, 00:11:51.093 "current_admin_qpairs": 0, 00:11:51.093 "current_io_qpairs": 0, 00:11:51.093 "pending_bdev_io": 0, 00:11:51.093 "completed_nvme_io": 0, 00:11:51.093 "transports": [] 00:11:51.093 }, 00:11:51.093 { 00:11:51.093 "name": "nvmf_tgt_poll_group_003", 00:11:51.093 "admin_qpairs": 0, 00:11:51.093 "io_qpairs": 0, 00:11:51.093 "current_admin_qpairs": 0, 00:11:51.093 "current_io_qpairs": 0, 00:11:51.093 "pending_bdev_io": 0, 00:11:51.093 "completed_nvme_io": 0, 00:11:51.093 "transports": [] 00:11:51.093 } 00:11:51.093 ] 00:11:51.093 }' 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 
'filter=.poll_groups[].name' 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.093 [2024-07-13 15:23:21.830122] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:51.093 "tick_rate": 2700000000, 00:11:51.093 "poll_groups": [ 00:11:51.093 { 00:11:51.093 "name": "nvmf_tgt_poll_group_000", 00:11:51.093 "admin_qpairs": 0, 00:11:51.093 "io_qpairs": 0, 00:11:51.093 "current_admin_qpairs": 0, 00:11:51.093 "current_io_qpairs": 0, 00:11:51.093 "pending_bdev_io": 0, 00:11:51.093 "completed_nvme_io": 0, 00:11:51.093 "transports": [ 00:11:51.093 { 00:11:51.093 "trtype": "TCP" 00:11:51.093 } 00:11:51.093 ] 00:11:51.093 }, 00:11:51.093 { 00:11:51.093 "name": "nvmf_tgt_poll_group_001", 00:11:51.093 "admin_qpairs": 0, 00:11:51.093 "io_qpairs": 0, 00:11:51.093 "current_admin_qpairs": 0, 00:11:51.093 "current_io_qpairs": 0, 00:11:51.093 "pending_bdev_io": 0, 00:11:51.093 "completed_nvme_io": 0, 00:11:51.093 "transports": [ 00:11:51.093 { 00:11:51.093 "trtype": "TCP" 00:11:51.093 } 00:11:51.093 ] 00:11:51.093 }, 00:11:51.093 { 00:11:51.093 "name": "nvmf_tgt_poll_group_002", 00:11:51.093 "admin_qpairs": 0, 00:11:51.093 "io_qpairs": 0, 00:11:51.093 "current_admin_qpairs": 0, 00:11:51.093 "current_io_qpairs": 0, 00:11:51.093 "pending_bdev_io": 0, 00:11:51.093 "completed_nvme_io": 0, 00:11:51.093 "transports": [ 00:11:51.093 { 00:11:51.093 "trtype": "TCP" 00:11:51.093 } 00:11:51.093 ] 00:11:51.093 }, 00:11:51.093 { 00:11:51.093 "name": "nvmf_tgt_poll_group_003", 00:11:51.093 "admin_qpairs": 0, 00:11:51.093 "io_qpairs": 0, 00:11:51.093 "current_admin_qpairs": 0, 00:11:51.093 "current_io_qpairs": 0, 00:11:51.093 "pending_bdev_io": 0, 00:11:51.093 "completed_nvme_io": 0, 00:11:51.093 "transports": [ 00:11:51.093 { 00:11:51.093 "trtype": "TCP" 00:11:51.093 } 00:11:51.093 ] 00:11:51.093 } 00:11:51.093 ] 00:11:51.093 }' 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:51.093 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.351 Malloc1 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.351 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.352 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.352 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.352 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.352 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.352 [2024-07-13 15:23:21.991352] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.352 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.352 15:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:51.352 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:51.352 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:51.352 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:11:51.352 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:51.352 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:11:51.352 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:51.352 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:11:51.352 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:51.352 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:11:51.352 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:11:51.352 15:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:51.352 [2024-07-13 15:23:22.013901] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:51.352 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:51.352 could not add new controller: failed to write to nvme-fabrics device 00:11:51.352 15:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:51.352 15:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:51.352 15:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:51.352 15:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:51.352 15:23:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:51.352 15:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.352 15:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.352 15:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.352 15:23:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:52.285 15:23:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:52.285 15:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:52.285 15:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.285 15:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:52.285 15:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # 
nvme_devices=1 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.187 [2024-07-13 15:23:24.852995] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:54.187 Failed to write to 
/dev/nvme-fabrics: Input/output error 00:11:54.187 could not add new controller: failed to write to nvme-fabrics device 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.187 15:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:55.119 15:23:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:55.119 15:23:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:55.119 15:23:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:55.119 15:23:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:55.119 15:23:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:57.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.024 15:23:27 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.024 [2024-07-13 15:23:27.697668] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.024 15:23:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:57.589 15:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.589 15:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:57.589 15:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.589 15:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:57.589 15:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:00.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.125 [2024-07-13 15:23:30.476812] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.125 15:23:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:00.382 15:23:31 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:00.382 15:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:00.382 15:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.382 15:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:00.382 15:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:02.942 [2024-07-13 15:23:33.253768] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.942 15:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:03.200 15:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:03.200 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:03.200 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.200 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:03.200 15:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:05.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 
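Each pass of the loop above drives the same create/connect/teardown cycle through rpc_cmd and nvme-cli. Below is a minimal standalone sketch of one iteration, using the rpc.py path, address, NQN and serial that appear in the trace; the unbounded wait loops are a simplified stand-in for the bounded waitforserial / waitforserial_disconnect helpers in autotest_common.sh, and the --hostnqn/--hostid options passed to nvme connect in the trace are omitted.

#!/usr/bin/env bash
# Sketch of one rpc.sh loop iteration: create the subsystem, expose it over
# TCP, connect from the host, wait for the namespace, then tear it all down.
set -e
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
serial=SPDKISFASTANDAWESOME

$rpc nvmf_create_subsystem "$nqn" -s "$serial"
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
$rpc nvmf_subsystem_allow_any_host "$nqn"

nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420

# wait until the namespace shows up as a block device carrying our serial
until lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do sleep 2; done

nvme disconnect -n "$nqn"

# wait until the block device is gone again
while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do sleep 2; done

$rpc nvmf_subsystem_remove_ns "$nqn" 5
$rpc nvmf_delete_subsystem "$nqn"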
00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.730 [2024-07-13 15:23:35.996078] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.730 15:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.730 15:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.730 15:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:05.730 15:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.730 15:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.730 15:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.730 15:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:05.989 15:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:05.989 15:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:05.989 15:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:05.989 15:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:05.989 15:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:07.893 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:07.893 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:07.893 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:07.893 15:23:38 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:07.893 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:07.893 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:07.893 15:23:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:08.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.154 [2024-07-13 15:23:38.798978] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:08.154 
15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.154 15:23:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:08.721 15:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:08.722 15:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:08.722 15:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.722 15:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:08.722 15:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:11.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.255 [2024-07-13 15:23:41.540072] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.255 [2024-07-13 15:23:41.588132] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.255 15:23:41 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.255 [2024-07-13 15:23:41.636302] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.255 [2024-07-13 15:23:41.684483] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.255 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.256 [2024-07-13 15:23:41.732642] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:11.256 "tick_rate": 2700000000, 00:12:11.256 "poll_groups": [ 00:12:11.256 { 00:12:11.256 "name": "nvmf_tgt_poll_group_000", 00:12:11.256 "admin_qpairs": 2, 00:12:11.256 "io_qpairs": 84, 00:12:11.256 "current_admin_qpairs": 0, 00:12:11.256 "current_io_qpairs": 0, 00:12:11.256 "pending_bdev_io": 0, 00:12:11.256 "completed_nvme_io": 184, 00:12:11.256 "transports": [ 00:12:11.256 { 00:12:11.256 "trtype": "TCP" 00:12:11.256 } 00:12:11.256 ] 00:12:11.256 }, 00:12:11.256 { 00:12:11.256 "name": "nvmf_tgt_poll_group_001", 00:12:11.256 "admin_qpairs": 2, 00:12:11.256 "io_qpairs": 84, 00:12:11.256 "current_admin_qpairs": 0, 00:12:11.256 "current_io_qpairs": 0, 00:12:11.256 "pending_bdev_io": 0, 
00:12:11.256 "completed_nvme_io": 183, 00:12:11.256 "transports": [ 00:12:11.256 { 00:12:11.256 "trtype": "TCP" 00:12:11.256 } 00:12:11.256 ] 00:12:11.256 }, 00:12:11.256 { 00:12:11.256 "name": "nvmf_tgt_poll_group_002", 00:12:11.256 "admin_qpairs": 1, 00:12:11.256 "io_qpairs": 84, 00:12:11.256 "current_admin_qpairs": 0, 00:12:11.256 "current_io_qpairs": 0, 00:12:11.256 "pending_bdev_io": 0, 00:12:11.256 "completed_nvme_io": 203, 00:12:11.256 "transports": [ 00:12:11.256 { 00:12:11.256 "trtype": "TCP" 00:12:11.256 } 00:12:11.256 ] 00:12:11.256 }, 00:12:11.256 { 00:12:11.256 "name": "nvmf_tgt_poll_group_003", 00:12:11.256 "admin_qpairs": 2, 00:12:11.256 "io_qpairs": 84, 00:12:11.256 "current_admin_qpairs": 0, 00:12:11.256 "current_io_qpairs": 0, 00:12:11.256 "pending_bdev_io": 0, 00:12:11.256 "completed_nvme_io": 116, 00:12:11.256 "transports": [ 00:12:11.256 { 00:12:11.256 "trtype": "TCP" 00:12:11.256 } 00:12:11.256 ] 00:12:11.256 } 00:12:11.256 ] 00:12:11.256 }' 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:11.256 rmmod nvme_tcp 00:12:11.256 rmmod nvme_fabrics 00:12:11.256 rmmod nvme_keyring 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1034920 ']' 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1034920 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1034920 ']' 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1034920 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1034920 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1034920' 00:12:11.256 killing process with pid 1034920 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1034920 00:12:11.256 15:23:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1034920 00:12:11.516 15:23:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:11.516 15:23:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:11.516 15:23:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:11.516 15:23:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:11.516 15:23:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:11.516 15:23:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.516 15:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.516 15:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.051 15:23:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:14.051 00:12:14.051 real 0m25.073s 00:12:14.051 user 1m21.653s 00:12:14.051 sys 0m4.036s 00:12:14.051 15:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:14.051 15:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.051 ************************************ 00:12:14.051 END TEST nvmf_rpc 00:12:14.051 ************************************ 00:12:14.051 15:23:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:14.051 15:23:44 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:14.051 15:23:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:14.051 15:23:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:14.051 15:23:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:14.051 ************************************ 00:12:14.051 START TEST nvmf_invalid 00:12:14.051 ************************************ 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:14.051 * Looking for test storage... 
00:12:14.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:14.051 15:23:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:15.432 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:15.432 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:15.432 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:15.432 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:15.432 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:15.433 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.433 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.692 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.692 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.692 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:15.692 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.692 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.692 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.692 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:15.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:15.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:12:15.692 00:12:15.692 --- 10.0.0.2 ping statistics --- 00:12:15.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.692 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:12:15.692 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:15.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:12:15.692 00:12:15.692 --- 10.0.0.1 ping statistics --- 00:12:15.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.692 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:12:15.693 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.693 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:15.693 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:15.693 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.693 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:15.693 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:15.693 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.693 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:15.693 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:15.693 15:23:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:15.693 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:15.693 15:23:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:15.693 15:23:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:15.693 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1039418 00:12:15.693 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.693 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1039418 00:12:15.693 15:23:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1039418 ']' 00:12:15.693 15:23:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.693 15:23:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:15.693 15:23:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.693 15:23:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:15.693 15:23:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:15.693 [2024-07-13 15:23:46.367903] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
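The nvmf_tcp_init steps a few lines up move the target port into its own network namespace and leave the second port on the host as the initiator; the two pings confirm the 10.0.0.1 / 10.0.0.2 link before the target starts. A condensed sketch of that setup, using the interface names, addresses and namespace from the trace:

ns=cvl_0_0_ns_spdk   # target namespace
tgt_if=cvl_0_0       # target-side port (moved into the namespace)
ini_if=cvl_0_1       # initiator-side port (stays on the host)

ip netns add "$ns"
ip link set "$tgt_if" netns "$ns"
ip addr add 10.0.0.1/24 dev "$ini_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
ip link set "$ini_if" up
ip netns exec "$ns" ip link set "$tgt_if" up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # host -> target namespace
ip netns exec "$ns" ping -c 1 10.0.0.1   # target namespace -> host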
00:12:15.693 [2024-07-13 15:23:46.367975] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.693 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.693 [2024-07-13 15:23:46.403921] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:15.693 [2024-07-13 15:23:46.431625] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.952 [2024-07-13 15:23:46.522616] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.952 [2024-07-13 15:23:46.522685] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.952 [2024-07-13 15:23:46.522707] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.952 [2024-07-13 15:23:46.522723] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.952 [2024-07-13 15:23:46.522737] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.952 [2024-07-13 15:23:46.522827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.952 [2024-07-13 15:23:46.522893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.952 [2024-07-13 15:23:46.522959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.952 [2024-07-13 15:23:46.522966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.952 15:23:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:15.952 15:23:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:12:15.952 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:15.952 15:23:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:15.952 15:23:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:15.952 15:23:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.952 15:23:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:15.952 15:23:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode197 00:12:16.210 [2024-07-13 15:23:46.901388] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:16.210 15:23:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:16.210 { 00:12:16.210 "nqn": "nqn.2016-06.io.spdk:cnode197", 00:12:16.210 "tgt_name": "foobar", 00:12:16.210 "method": "nvmf_create_subsystem", 00:12:16.210 "req_id": 1 00:12:16.210 } 00:12:16.210 Got JSON-RPC error response 00:12:16.210 response: 00:12:16.210 { 00:12:16.210 "code": -32603, 00:12:16.210 "message": "Unable to find target foobar" 00:12:16.210 }' 00:12:16.210 15:23:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:16.210 { 00:12:16.210 "nqn": "nqn.2016-06.io.spdk:cnode197", 00:12:16.210 "tgt_name": "foobar", 00:12:16.210 "method": "nvmf_create_subsystem", 00:12:16.210 "req_id": 1 
00:12:16.210 } 00:12:16.210 Got JSON-RPC error response 00:12:16.210 response: 00:12:16.210 { 00:12:16.210 "code": -32603, 00:12:16.210 "message": "Unable to find target foobar" 00:12:16.210 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:16.210 15:23:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:16.210 15:23:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode23247 00:12:16.468 [2024-07-13 15:23:47.158297] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23247: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:16.468 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:16.468 { 00:12:16.468 "nqn": "nqn.2016-06.io.spdk:cnode23247", 00:12:16.468 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:16.468 "method": "nvmf_create_subsystem", 00:12:16.468 "req_id": 1 00:12:16.468 } 00:12:16.468 Got JSON-RPC error response 00:12:16.468 response: 00:12:16.468 { 00:12:16.468 "code": -32602, 00:12:16.468 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:16.468 }' 00:12:16.468 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:16.468 { 00:12:16.468 "nqn": "nqn.2016-06.io.spdk:cnode23247", 00:12:16.468 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:16.468 "method": "nvmf_create_subsystem", 00:12:16.468 "req_id": 1 00:12:16.468 } 00:12:16.468 Got JSON-RPC error response 00:12:16.468 response: 00:12:16.468 { 00:12:16.468 "code": -32602, 00:12:16.468 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:16.468 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:16.468 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:16.468 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25511 00:12:16.727 [2024-07-13 15:23:47.471245] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25511: invalid model number 'SPDK_Controller' 00:12:16.727 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:16.727 { 00:12:16.727 "nqn": "nqn.2016-06.io.spdk:cnode25511", 00:12:16.727 "model_number": "SPDK_Controller\u001f", 00:12:16.727 "method": "nvmf_create_subsystem", 00:12:16.727 "req_id": 1 00:12:16.727 } 00:12:16.727 Got JSON-RPC error response 00:12:16.727 response: 00:12:16.727 { 00:12:16.727 "code": -32602, 00:12:16.727 "message": "Invalid MN SPDK_Controller\u001f" 00:12:16.727 }' 00:12:16.727 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:16.727 { 00:12:16.727 "nqn": "nqn.2016-06.io.spdk:cnode25511", 00:12:16.727 "model_number": "SPDK_Controller\u001f", 00:12:16.727 "method": "nvmf_create_subsystem", 00:12:16.727 "req_id": 1 00:12:16.727 } 00:12:16.727 Got JSON-RPC error response 00:12:16.727 response: 00:12:16.727 { 00:12:16.727 "code": -32602, 00:12:16.727 "message": "Invalid MN SPDK_Controller\u001f" 00:12:16.727 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' 
'48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.985 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.986 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:16.986 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:16.986 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:16.986 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.986 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.986 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:16.986 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:16.986 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:16.986 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.986 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.986 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:16.986 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:16.986 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:16.986 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:16.986 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:16.986 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ P == \- ]] 00:12:16.986 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Pv%.1IW>.TYA{:&tn4{yc' 00:12:16.986 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Pv%.1IW>.TYA{:&tn4{yc' nqn.2016-06.io.spdk:cnode21124 00:12:17.243 [2024-07-13 15:23:47.772277] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21124: invalid serial number 'Pv%.1IW>.TYA{:&tn4{yc' 00:12:17.243 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:17.243 { 00:12:17.243 "nqn": "nqn.2016-06.io.spdk:cnode21124", 00:12:17.243 "serial_number": "Pv%.1IW>.TYA{:&tn4{yc", 00:12:17.243 "method": "nvmf_create_subsystem", 00:12:17.243 "req_id": 1 00:12:17.243 } 00:12:17.243 Got JSON-RPC error response 00:12:17.243 response: 00:12:17.243 { 00:12:17.243 "code": -32602, 00:12:17.243 "message": "Invalid SN Pv%.1IW>.TYA{:&tn4{yc" 00:12:17.243 }' 00:12:17.243 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:17.243 { 00:12:17.243 "nqn": "nqn.2016-06.io.spdk:cnode21124", 00:12:17.243 "serial_number": "Pv%.1IW>.TYA{:&tn4{yc", 00:12:17.243 "method": "nvmf_create_subsystem", 00:12:17.243 "req_id": 1 00:12:17.243 } 00:12:17.243 Got JSON-RPC error response 00:12:17.243 response: 00:12:17.243 { 00:12:17.243 "code": -32602, 00:12:17.244 "message": "Invalid SN Pv%.1IW>.TYA{:&tn4{yc" 00:12:17.244 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@19 -- # local length=41 ll 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll < length )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 
-- # printf %x 87 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:17.244 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ - == \- ]] 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@29 -- # string='\-[,4CO!d=h@5,]WEV+!@`z' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '\-[,4CO!d=h@5,]WEV+!@`z' 00:12:17.245 15:23:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '\-[,4CO!d=h@5,]WEV+!@`z' nqn.2016-06.io.spdk:cnode17503 00:12:17.503 [2024-07-13 15:23:48.153569] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17503: invalid model number '\-[,4CO!d=h@5,]WEV+!@`z' 00:12:17.503 15:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:17.503 { 00:12:17.503 "nqn": "nqn.2016-06.io.spdk:cnode17503", 00:12:17.503 "model_number": "\\-[,4CO!d=h@5,]WEV+!@`z", 00:12:17.503 "method": 
"nvmf_create_subsystem", 00:12:17.503 "req_id": 1 00:12:17.503 } 00:12:17.503 Got JSON-RPC error response 00:12:17.503 response: 00:12:17.503 { 00:12:17.503 "code": -32602, 00:12:17.503 "message": "Invalid MN \\-[,4CO!d=h@5,]WEV+!@`z" 00:12:17.503 }' 00:12:17.503 15:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:17.503 { 00:12:17.503 "nqn": "nqn.2016-06.io.spdk:cnode17503", 00:12:17.503 "model_number": "\\-[,4CO!d=h@5,]WEV+!@`z", 00:12:17.503 "method": "nvmf_create_subsystem", 00:12:17.503 "req_id": 1 00:12:17.503 } 00:12:17.503 Got JSON-RPC error response 00:12:17.503 response: 00:12:17.503 { 00:12:17.503 "code": -32602, 00:12:17.503 "message": "Invalid MN \\-[,4CO!d=h@5,]WEV+!@`z" 00:12:17.503 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:17.503 15:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:17.762 [2024-07-13 15:23:48.406505] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.762 15:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:18.020 15:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:18.020 15:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:18.020 15:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:18.020 15:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:18.020 15:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:18.278 [2024-07-13 15:23:48.908225] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:18.278 15:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:18.278 { 00:12:18.278 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:18.278 "listen_address": { 00:12:18.278 "trtype": "tcp", 00:12:18.278 "traddr": "", 00:12:18.278 "trsvcid": "4421" 00:12:18.278 }, 00:12:18.278 "method": "nvmf_subsystem_remove_listener", 00:12:18.278 "req_id": 1 00:12:18.278 } 00:12:18.278 Got JSON-RPC error response 00:12:18.278 response: 00:12:18.278 { 00:12:18.278 "code": -32602, 00:12:18.278 "message": "Invalid parameters" 00:12:18.278 }' 00:12:18.278 15:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:18.278 { 00:12:18.278 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:18.278 "listen_address": { 00:12:18.278 "trtype": "tcp", 00:12:18.278 "traddr": "", 00:12:18.278 "trsvcid": "4421" 00:12:18.278 }, 00:12:18.278 "method": "nvmf_subsystem_remove_listener", 00:12:18.278 "req_id": 1 00:12:18.278 } 00:12:18.278 Got JSON-RPC error response 00:12:18.278 response: 00:12:18.278 { 00:12:18.278 "code": -32602, 00:12:18.278 "message": "Invalid parameters" 00:12:18.278 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:18.278 15:23:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20619 -i 0 00:12:18.536 [2024-07-13 15:23:49.160997] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20619: invalid cntlid range [0-65519] 00:12:18.536 15:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:18.536 { 
00:12:18.536 "nqn": "nqn.2016-06.io.spdk:cnode20619", 00:12:18.536 "min_cntlid": 0, 00:12:18.536 "method": "nvmf_create_subsystem", 00:12:18.536 "req_id": 1 00:12:18.536 } 00:12:18.536 Got JSON-RPC error response 00:12:18.536 response: 00:12:18.536 { 00:12:18.536 "code": -32602, 00:12:18.536 "message": "Invalid cntlid range [0-65519]" 00:12:18.536 }' 00:12:18.536 15:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:18.536 { 00:12:18.536 "nqn": "nqn.2016-06.io.spdk:cnode20619", 00:12:18.536 "min_cntlid": 0, 00:12:18.536 "method": "nvmf_create_subsystem", 00:12:18.536 "req_id": 1 00:12:18.536 } 00:12:18.536 Got JSON-RPC error response 00:12:18.536 response: 00:12:18.536 { 00:12:18.536 "code": -32602, 00:12:18.536 "message": "Invalid cntlid range [0-65519]" 00:12:18.536 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:18.536 15:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18650 -i 65520 00:12:18.793 [2024-07-13 15:23:49.421861] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18650: invalid cntlid range [65520-65519] 00:12:18.793 15:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:18.793 { 00:12:18.793 "nqn": "nqn.2016-06.io.spdk:cnode18650", 00:12:18.793 "min_cntlid": 65520, 00:12:18.793 "method": "nvmf_create_subsystem", 00:12:18.793 "req_id": 1 00:12:18.793 } 00:12:18.793 Got JSON-RPC error response 00:12:18.793 response: 00:12:18.793 { 00:12:18.793 "code": -32602, 00:12:18.793 "message": "Invalid cntlid range [65520-65519]" 00:12:18.793 }' 00:12:18.793 15:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:18.793 { 00:12:18.793 "nqn": "nqn.2016-06.io.spdk:cnode18650", 00:12:18.793 "min_cntlid": 65520, 00:12:18.793 "method": "nvmf_create_subsystem", 00:12:18.793 "req_id": 1 00:12:18.793 } 00:12:18.793 Got JSON-RPC error response 00:12:18.793 response: 00:12:18.793 { 00:12:18.793 "code": -32602, 00:12:18.793 "message": "Invalid cntlid range [65520-65519]" 00:12:18.793 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:18.793 15:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8996 -I 0 00:12:19.050 [2024-07-13 15:23:49.666650] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8996: invalid cntlid range [1-0] 00:12:19.050 15:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:19.050 { 00:12:19.050 "nqn": "nqn.2016-06.io.spdk:cnode8996", 00:12:19.050 "max_cntlid": 0, 00:12:19.050 "method": "nvmf_create_subsystem", 00:12:19.050 "req_id": 1 00:12:19.050 } 00:12:19.050 Got JSON-RPC error response 00:12:19.050 response: 00:12:19.050 { 00:12:19.050 "code": -32602, 00:12:19.050 "message": "Invalid cntlid range [1-0]" 00:12:19.050 }' 00:12:19.050 15:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:19.050 { 00:12:19.050 "nqn": "nqn.2016-06.io.spdk:cnode8996", 00:12:19.050 "max_cntlid": 0, 00:12:19.050 "method": "nvmf_create_subsystem", 00:12:19.050 "req_id": 1 00:12:19.050 } 00:12:19.050 Got JSON-RPC error response 00:12:19.050 response: 00:12:19.050 { 00:12:19.050 "code": -32602, 00:12:19.050 "message": "Invalid cntlid range [1-0]" 00:12:19.050 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:19.050 15:23:49 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode219 -I 65520 00:12:19.308 [2024-07-13 15:23:49.911470] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode219: invalid cntlid range [1-65520] 00:12:19.308 15:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:19.308 { 00:12:19.308 "nqn": "nqn.2016-06.io.spdk:cnode219", 00:12:19.308 "max_cntlid": 65520, 00:12:19.308 "method": "nvmf_create_subsystem", 00:12:19.308 "req_id": 1 00:12:19.308 } 00:12:19.308 Got JSON-RPC error response 00:12:19.308 response: 00:12:19.308 { 00:12:19.308 "code": -32602, 00:12:19.308 "message": "Invalid cntlid range [1-65520]" 00:12:19.308 }' 00:12:19.308 15:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:19.308 { 00:12:19.308 "nqn": "nqn.2016-06.io.spdk:cnode219", 00:12:19.308 "max_cntlid": 65520, 00:12:19.308 "method": "nvmf_create_subsystem", 00:12:19.308 "req_id": 1 00:12:19.308 } 00:12:19.308 Got JSON-RPC error response 00:12:19.308 response: 00:12:19.308 { 00:12:19.308 "code": -32602, 00:12:19.308 "message": "Invalid cntlid range [1-65520]" 00:12:19.308 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:19.308 15:23:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24588 -i 6 -I 5 00:12:19.566 [2024-07-13 15:23:50.164348] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24588: invalid cntlid range [6-5] 00:12:19.566 15:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:19.566 { 00:12:19.566 "nqn": "nqn.2016-06.io.spdk:cnode24588", 00:12:19.566 "min_cntlid": 6, 00:12:19.566 "max_cntlid": 5, 00:12:19.566 "method": "nvmf_create_subsystem", 00:12:19.566 "req_id": 1 00:12:19.566 } 00:12:19.566 Got JSON-RPC error response 00:12:19.566 response: 00:12:19.566 { 00:12:19.566 "code": -32602, 00:12:19.566 "message": "Invalid cntlid range [6-5]" 00:12:19.566 }' 00:12:19.566 15:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:19.566 { 00:12:19.566 "nqn": "nqn.2016-06.io.spdk:cnode24588", 00:12:19.566 "min_cntlid": 6, 00:12:19.566 "max_cntlid": 5, 00:12:19.566 "method": "nvmf_create_subsystem", 00:12:19.566 "req_id": 1 00:12:19.566 } 00:12:19.566 Got JSON-RPC error response 00:12:19.566 response: 00:12:19.566 { 00:12:19.566 "code": -32602, 00:12:19.566 "message": "Invalid cntlid range [6-5]" 00:12:19.566 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:19.566 15:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:19.566 15:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:19.566 { 00:12:19.566 "name": "foobar", 00:12:19.566 "method": "nvmf_delete_target", 00:12:19.566 "req_id": 1 00:12:19.566 } 00:12:19.566 Got JSON-RPC error response 00:12:19.566 response: 00:12:19.566 { 00:12:19.566 "code": -32602, 00:12:19.566 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:12:19.566 }' 00:12:19.566 15:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:19.566 { 00:12:19.566 "name": "foobar", 00:12:19.566 "method": "nvmf_delete_target", 00:12:19.566 "req_id": 1 00:12:19.566 } 00:12:19.566 Got JSON-RPC error response 00:12:19.566 response: 00:12:19.566 { 00:12:19.566 "code": -32602, 00:12:19.566 "message": "The specified target doesn't exist, cannot delete it." 00:12:19.566 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:19.566 15:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:19.566 15:23:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:19.566 15:23:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:19.566 15:23:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:12:19.566 15:23:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:19.566 15:23:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:12:19.566 15:23:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:19.566 15:23:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:19.566 rmmod nvme_tcp 00:12:19.566 rmmod nvme_fabrics 00:12:19.855 rmmod nvme_keyring 00:12:19.855 15:23:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:19.855 15:23:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:12:19.855 15:23:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:12:19.855 15:23:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1039418 ']' 00:12:19.855 15:23:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1039418 00:12:19.855 15:23:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1039418 ']' 00:12:19.855 15:23:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1039418 00:12:19.855 15:23:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:12:19.855 15:23:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:19.855 15:23:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1039418 00:12:19.855 15:23:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:19.855 15:23:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:19.855 15:23:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1039418' 00:12:19.855 killing process with pid 1039418 00:12:19.855 15:23:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1039418 00:12:19.855 15:23:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 1039418 00:12:20.115 15:23:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:20.115 15:23:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:20.115 15:23:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:20.115 15:23:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:20.115 15:23:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:20.115 15:23:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.115 15:23:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
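The nvmf_invalid run traced above follows one repeating pattern: generate or hard-code an out-of-spec value, call rpc.py nvmf_create_subsystem with it, capture the JSON-RPC error, and glob-match the message. A condensed, hedged sketch of that pattern (the rpc.py path is shortened and the NQN/values are illustrative, not the script's exact ones):

# Sketch of the negative-test loop seen in target/invalid.sh above.
rpc=./scripts/rpc.py

# Build a random serial number from printable ASCII, mirroring the
# printf %x / echo -e mechanics visible in the gen_random_s trace.
gen_random_s() {
    local length=$1 ll string=
    for (( ll = 0; ll < length; ll++ )); do
        printf -v hex '%x' $(( RANDOM % 94 + 33 ))   # pick a printable character code
        string+=$(echo -e "\\x$hex")                 # decode it and append
    done
    echo "$string"
}

# Each call is expected to fail; the error text is captured and matched,
# as in the [[ $out == *Invalid\ SN* ]] style checks traced above.
out=$($rpc nvmf_create_subsystem -s "$(gen_random_s 21)" nqn.2016-06.io.spdk:cnode1 2>&1) || true
[[ $out == *"Invalid SN"* ]]

# cntlid limits are validated the same way: the accepted range is 1-65519 and
# min_cntlid must not exceed max_cntlid, hence the [0-65519], [65520-65519],
# [1-0], [1-65520] and [6-5] errors in the log above.
out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 0 2>&1) || true
[[ $out == *"Invalid cntlid range"* ]]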
00:12:20.115 15:23:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.019 15:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:22.019 00:12:22.019 real 0m8.371s 00:12:22.019 user 0m19.922s 00:12:22.019 sys 0m2.227s 00:12:22.019 15:23:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:22.019 15:23:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:22.019 ************************************ 00:12:22.019 END TEST nvmf_invalid 00:12:22.019 ************************************ 00:12:22.019 15:23:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:22.019 15:23:52 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:22.019 15:23:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:22.019 15:23:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:22.019 15:23:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:22.019 ************************************ 00:12:22.019 START TEST nvmf_abort 00:12:22.019 ************************************ 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:22.019 * Looking for test storage... 00:12:22.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:22.019 
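The nvmftestinit sequence traced below rediscovers the two e810 ports (cvl_0_0 and cvl_0_1 under 0000:0a:00.0/0000:0a:00.1) and moves the target port into a network namespace so the NVMe/TCP target at 10.0.0.2 and the initiator at 10.0.0.1 talk over real hardware. A condensed sketch of that wiring, using only the commands and values shown in the nvmf/common.sh trace that follows:

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add $NVMF_TARGET_NAMESPACE
ip link set cvl_0_0 netns $NVMF_TARGET_NAMESPACE    # target port lives in the namespace
ip addr add $NVMF_INITIATOR_IP/24 dev cvl_0_1       # initiator side stays in the root namespace
ip netns exec $NVMF_TARGET_NAMESPACE ip addr add $NVMF_FIRST_TARGET_IP/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NVMF_TARGET_NAMESPACE ip link set cvl_0_0 up
ip netns exec $NVMF_TARGET_NAMESPACE ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 $NVMF_FIRST_TARGET_IP                     # connectivity check before the target starts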
15:23:52 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:22.019 15:23:52 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:22.278 15:23:52 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:12:22.278 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:22.278 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.278 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:22.278 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:22.278 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:22.278 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.278 15:23:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.278 15:23:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.278 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:22.278 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:22.278 15:23:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:12:22.278 15:23:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:24.195 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:24.195 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:24.196 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:24.196 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:24.196 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:24.196 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:12:24.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:12:24.196 00:12:24.196 --- 10.0.0.2 ping statistics --- 00:12:24.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.196 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:24.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:24.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:12:24.196 00:12:24.196 --- 10.0.0.1 ping statistics --- 00:12:24.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.196 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:24.196 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:24.455 15:23:54 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:24.455 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:24.455 15:23:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:24.455 15:23:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:24.455 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1041938 00:12:24.455 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:24.455 15:23:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1041938 00:12:24.455 15:23:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1041938 ']' 00:12:24.455 15:23:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.455 15:23:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:24.455 15:23:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.455 15:23:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:24.455 15:23:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:24.455 [2024-07-13 15:23:55.033972] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
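For reference, the nvmftestinit phase traced above moves one port of the NIC pair into a private network namespace so that the initiator and the target can exchange real NVMe/TCP traffic on a single host. The following is a minimal sketch of that plumbing only, using the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses reported in this log; it is not the full nvmftestinit logic, and the netdev names will differ on other NICs.

#!/usr/bin/env bash
# Sketch of the namespace-based TCP loopback set up by nvmftestinit (names from the log above).
set -euo pipefail

NS=cvl_0_0_ns_spdk     # namespace that hosts the target side
TGT_IF=cvl_0_0         # NIC port handed to the target namespace
INI_IF=cvl_0_1         # NIC port left in the default namespace for the initiator

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to the listener port used by these tests.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity checks mirroring the pings in the log.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1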
00:12:24.455 [2024-07-13 15:23:55.034050] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.455 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.455 [2024-07-13 15:23:55.072321] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:24.455 [2024-07-13 15:23:55.105192] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:24.455 [2024-07-13 15:23:55.200098] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.455 [2024-07-13 15:23:55.200163] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.455 [2024-07-13 15:23:55.200181] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.455 [2024-07-13 15:23:55.200202] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.455 [2024-07-13 15:23:55.200214] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:24.455 [2024-07-13 15:23:55.200297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.455 [2024-07-13 15:23:55.200367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.455 [2024-07-13 15:23:55.200370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:24.714 [2024-07-13 15:23:55.356180] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:24.714 Malloc0 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:24.714 Delay0 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:24.714 [2024-07-13 15:23:55.426167] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.714 15:23:55 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:24.714 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.971 [2024-07-13 15:23:55.574030] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:27.509 Initializing NVMe Controllers 00:12:27.509 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:27.509 controller IO queue size 128 less than required 00:12:27.509 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:27.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:27.509 Initialization complete. Launching workers. 
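Condensed for readability, the target-side configuration that target/abort.sh issued above (rpc.py calls against the nvmf_tgt running in the cvl_0_0_ns_spdk namespace) and the initiator command it then launched amount to the sequence below. The transport options, bdev sizes, NQN, listener address, and queue depth are the ones shown in the trace; paths are relative to the SPDK checkout, and the RPC socket is assumed to be the default /var/tmp/spdk.sock.

#!/usr/bin/env bash
# Condensed view of the abort-test setup traced above; assumes nvmf_tgt is already running.
set -euo pipefail
RPC="scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
$RPC bdev_malloc_create 64 4096 -b Malloc0                 # 64 MB malloc bdev, 4096-byte blocks
$RPC bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000           # delay values are in microseconds
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Initiator side: queue reads against the deliberately slow Delay0 namespace and abort them.
build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
     -c 0x1 -t 1 -l warning -q 128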
00:12:27.509 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 33587 00:12:27.509 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33652, failed to submit 62 00:12:27.509 success 33591, unsuccess 61, failed 0 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:27.509 rmmod nvme_tcp 00:12:27.509 rmmod nvme_fabrics 00:12:27.509 rmmod nvme_keyring 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1041938 ']' 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1041938 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1041938 ']' 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1041938 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1041938 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1041938' 00:12:27.509 killing process with pid 1041938 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1041938 00:12:27.509 15:23:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1041938 00:12:27.509 15:23:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:27.509 15:23:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:27.509 15:23:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:27.509 15:23:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:27.509 15:23:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:27.509 15:23:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.509 15:23:58 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.509 15:23:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.418 15:24:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:29.418 00:12:29.418 real 0m7.375s 00:12:29.418 user 0m10.829s 00:12:29.418 sys 0m2.548s 00:12:29.418 15:24:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:29.418 15:24:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:29.418 ************************************ 00:12:29.418 END TEST nvmf_abort 00:12:29.418 ************************************ 00:12:29.418 15:24:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:29.418 15:24:00 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:29.418 15:24:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:29.418 15:24:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:29.418 15:24:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:29.418 ************************************ 00:12:29.418 START TEST nvmf_ns_hotplug_stress 00:12:29.418 ************************************ 00:12:29.418 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:29.418 * Looking for test storage... 00:12:29.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.418 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.677 15:24:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:29.677 15:24:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:12:29.677 15:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.583 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:31.584 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:31.584 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.584 15:24:02 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:31.584 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:31.584 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:31.584 15:24:02 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:31.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:12:31.584 00:12:31.584 --- 10.0.0.2 ping statistics --- 00:12:31.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.584 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:31.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:12:31.584 00:12:31.584 --- 10.0.0.1 ping statistics --- 00:12:31.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.584 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1044383 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1044383 00:12:31.584 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1044383 ']' 00:12:31.585 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.585 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:31.585 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.585 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:31.585 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.585 [2024-07-13 15:24:02.280974] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:12:31.585 [2024-07-13 15:24:02.281068] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.585 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.585 [2024-07-13 15:24:02.329026] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:31.844 [2024-07-13 15:24:02.360342] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:31.844 [2024-07-13 15:24:02.459119] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.844 [2024-07-13 15:24:02.459187] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.844 [2024-07-13 15:24:02.459203] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.844 [2024-07-13 15:24:02.459217] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.844 [2024-07-13 15:24:02.459228] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.844 [2024-07-13 15:24:02.459283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.844 [2024-07-13 15:24:02.462891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.844 [2024-07-13 15:24:02.462896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.844 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:31.844 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:12:31.844 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:31.844 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:31.844 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.844 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.844 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:31.844 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:32.103 [2024-07-13 15:24:02.849777] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.363 15:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:32.621 15:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.621 [2024-07-13 15:24:03.352627] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.621 15:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:33.187 15:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:33.447 Malloc0 00:12:33.447 15:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:33.705 Delay0 00:12:33.705 15:24:04 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:33.964 15:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:33.964 NULL1 00:12:33.964 15:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:34.223 15:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1044677 00:12:34.223 15:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:34.223 15:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:12:34.223 15:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.482 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.418 Read completed with error (sct=0, sc=11) 00:12:35.418 15:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:35.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:35.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:35.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:35.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:35.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:35.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:35.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:35.676 15:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:35.676 15:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:35.935 true 00:12:35.935 15:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:12:35.935 15:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.872 15:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.129 15:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:37.129 15:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:37.387 true 00:12:37.387 15:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:12:37.387 15:24:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.670 15:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.928 15:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:37.928 15:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:37.928 true 00:12:38.186 15:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:12:38.186 15:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:38.443 15:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:38.702 15:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:38.702 15:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:38.702 true 00:12:38.960 15:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:12:38.960 15:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.893 15:24:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:40.150 15:24:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:40.150 15:24:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:40.407 true 00:12:40.407 15:24:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:12:40.407 15:24:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.665 15:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:40.922 15:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:40.922 15:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:41.180 true 00:12:41.180 15:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:12:41.180 15:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:12:42.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:42.116 15:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:42.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:42.116 15:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:42.116 15:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:42.373 true 00:12:42.373 15:24:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:12:42.373 15:24:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.629 15:24:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.886 15:24:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:42.886 15:24:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:43.144 true 00:12:43.144 15:24:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:12:43.144 15:24:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.077 15:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.335 15:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:44.335 15:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:44.593 true 00:12:44.593 15:24:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:12:44.593 15:24:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.851 15:24:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.108 15:24:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:45.108 15:24:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:45.365 true 00:12:45.365 15:24:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:12:45.365 15:24:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:46.301 15:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:46.560 15:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:46.560 15:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:46.560 true 00:12:46.560 15:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:12:46.560 15:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.819 15:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:47.078 15:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:47.078 15:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:47.336 true 00:12:47.336 15:24:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:12:47.336 15:24:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.276 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:48.276 15:24:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.533 15:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:48.533 15:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:48.791 true 00:12:48.792 15:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:12:48.792 15:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.050 15:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.308 15:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:49.308 15:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:49.565 true 00:12:49.565 15:24:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 
00:12:49.565 15:24:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.502 15:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.762 15:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:50.762 15:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:51.022 true 00:12:51.022 15:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:12:51.022 15:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.022 15:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:51.282 15:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:51.282 15:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:51.541 true 00:12:51.541 15:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:12:51.541 15:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:52.481 15:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:52.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:52.767 15:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:52.767 15:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:53.023 true 00:12:53.023 15:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:12:53.023 15:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.280 15:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.537 15:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:53.537 15:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:53.794 true 00:12:53.794 15:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:12:53.794 15:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.728 15:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.986 15:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:54.986 15:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:55.244 true 00:12:55.244 15:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:12:55.244 15:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.502 15:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.761 15:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:55.761 15:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:56.018 true 00:12:56.018 15:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:12:56.019 15:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.954 15:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.954 15:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:56.954 15:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:57.212 true 00:12:57.212 15:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:12:57.212 15:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.470 15:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.728 15:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:12:57.728 15:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:57.986 true 00:12:57.986 
15:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:12:57.986 15:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.245 15:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:58.502 15:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:58.503 15:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:58.760 true 00:12:58.760 15:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:12:58.760 15:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.697 15:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.956 15:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:59.956 15:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:00.215 true 00:13:00.215 15:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:13:00.215 15:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.473 15:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.731 15:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:00.731 15:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:00.989 true 00:13:00.989 15:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:13:00.989 15:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.926 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:01.926 15:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.185 15:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:02.185 15:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:02.185 true 00:13:02.185 15:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 1044677 00:13:02.185 15:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.442 15:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.700 15:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:02.700 15:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:02.958 true 00:13:02.958 15:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:13:02.958 15:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:03.894 15:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.153 15:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:04.153 15:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:04.411 true 00:13:04.411 15:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:13:04.411 15:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.411 Initializing NVMe Controllers 00:13:04.411 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:04.411 Controller IO queue size 128, less than required. 00:13:04.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:04.411 Controller IO queue size 128, less than required. 00:13:04.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:04.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:04.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:04.411 Initialization complete. Launching workers. 
00:13:04.411 ======================================================== 00:13:04.411 Latency(us) 00:13:04.411 Device Information : IOPS MiB/s Average min max 00:13:04.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 756.14 0.37 88381.36 3280.85 1014479.79 00:13:04.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11152.39 5.45 11478.24 2951.63 454104.44 00:13:04.411 ======================================================== 00:13:04.411 Total : 11908.53 5.81 16361.26 2951.63 1014479.79 00:13:04.411 00:13:04.669 15:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.926 15:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:04.926 15:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:05.184 true 00:13:05.184 15:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1044677 00:13:05.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1044677) - No such process 00:13:05.184 15:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1044677 00:13:05.184 15:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.472 15:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:05.739 15:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:05.739 15:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:05.739 15:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:05.739 15:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:05.739 15:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:05.739 null0 00:13:05.739 15:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:05.739 15:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:05.739 15:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:05.996 null1 00:13:05.996 15:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:05.996 15:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:05.996 15:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:06.252 null2 00:13:06.252 15:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:06.252 15:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:13:06.252 15:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:06.509 null3 00:13:06.509 15:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:06.509 15:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:06.509 15:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:06.766 null4 00:13:06.766 15:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:06.766 15:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:06.766 15:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:07.024 null5 00:13:07.024 15:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:07.024 15:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:07.024 15:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:07.281 null6 00:13:07.281 15:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:07.281 15:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:07.281 15:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:07.539 null7 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:07.539 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:07.540 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:07.540 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:07.540 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.540 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:07.540 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:07.540 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:07.540 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:07.540 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:07.540 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:07.540 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1049225 1049226 1049227 1049230 1049232 1049234 1049236 1049238 00:13:07.540 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:07.540 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.540 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:07.797 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:07.797 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.797 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:07.797 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:07.797 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:07.797 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:07.797 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:07.797 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.054 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.054 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.054 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:08.054 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.054 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.054 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:13:08.054 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.054 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.054 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:08.054 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.054 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.054 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:08.054 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.054 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.054 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:08.054 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.054 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.054 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:08.054 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.055 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.055 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:08.055 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.055 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.055 15:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:08.312 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:08.312 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:08.312 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:08.312 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:08.312 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:08.312 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:08.312 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:08.312 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.569 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.569 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.569 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:08.569 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.569 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.569 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:08.569 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.569 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.569 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:08.569 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.569 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.569 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:08.569 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.569 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.569 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.569 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.569 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:08.569 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:08.569 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.569 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.569 15:24:39 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:08.569 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.569 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.569 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:08.827 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:08.827 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:08.827 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:08.827 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:08.827 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:08.827 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:08.827 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:08.827 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.085 15:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:09.342 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:09.342 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:09.342 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:09.600 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:09.600 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:09.600 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:09.600 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:09.600 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.859 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.859 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.859 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:09.859 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.859 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.859 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:09.859 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.859 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.859 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:09.859 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.859 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.859 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:09.859 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.859 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.859 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:09.859 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.859 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.859 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.859 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:09.859 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.859 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:09.859 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.859 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.859 
15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:10.117 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:10.117 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:10.117 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:10.117 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:10.117 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:10.117 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:10.117 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.117 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.375 15:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:10.632 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:10.632 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:10.632 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:10.632 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:10.632 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:10.632 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:10.632 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.632 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.890 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:11.148 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:11.148 
15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:11.148 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:11.148 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:11.148 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:11.148 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:11.148 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:11.148 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.419 15:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.419 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.419 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:11.419 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.419 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.419 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:11.419 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.419 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.419 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:11.419 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.419 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.419 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.419 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:11.419 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.419 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:11.419 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.419 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.419 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:11.419 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.419 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.419 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.419 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:11.419 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.419 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:11.683 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:11.683 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:11.683 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:11.683 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.683 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:11.683 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:11.683 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:11.683 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:11.942 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.942 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.942 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:11.942 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:11.942 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.942 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:11.942 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.942 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.942 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:11.942 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.942 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.942 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:11.942 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.942 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.943 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:11.943 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.943 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.943 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:11.943 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.943 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.943 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:11.943 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.943 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.943 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:12.200 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:12.200 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.200 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:12.200 
15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:12.200 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:12.200 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:12.200 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:12.200 15:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.457 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:12.714 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:12.714 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.714 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:12.714 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.714 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:12.714 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:12.714 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:12.714 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:12.972 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:12.972 rmmod nvme_tcp 00:13:12.972 rmmod nvme_fabrics 00:13:12.972 rmmod nvme_keyring 00:13:13.230 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:13.230 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:13.230 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:13.230 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1044383 ']' 00:13:13.230 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1044383 00:13:13.230 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1044383 ']' 00:13:13.230 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1044383 00:13:13.230 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:13:13.230 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:13.230 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1044383 00:13:13.230 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:13.230 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:13.230 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1044383' 00:13:13.230 killing process with pid 1044383 00:13:13.230 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1044383 00:13:13.230 15:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1044383 00:13:13.488 15:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:13.488 15:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:13.488 15:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:13.488 15:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:13.488 15:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:13.488 15:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.488 15:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.488 15:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.388 15:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:15.388 00:13:15.388 real 0m45.948s 00:13:15.388 user 3m29.355s 00:13:15.388 sys 0m16.122s 00:13:15.388 15:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:15.388 15:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.388 ************************************ 00:13:15.388 END TEST nvmf_ns_hotplug_stress 00:13:15.388 ************************************ 00:13:15.388 15:24:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:15.388 15:24:46 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:15.388 15:24:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:15.388 15:24:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.388 15:24:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:15.388 ************************************ 00:13:15.388 START TEST nvmf_connect_stress 00:13:15.388 ************************************ 00:13:15.388 15:24:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:15.646 * Looking for test storage... 
00:13:15.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.646 15:24:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.646 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:15.646 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.646 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.646 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.646 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.646 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.646 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.646 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.646 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.646 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.646 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.646 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:15.646 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:15.646 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.646 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.646 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.646 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.646 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.646 15:24:46 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.646 15:24:46 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.646 15:24:46 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:15.647 15:24:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:17.548 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:17.548 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:17.548 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:17.548 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:17.549 15:24:48 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:17.549 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:17.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:17.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:13:17.549 00:13:17.549 --- 10.0.0.2 ping statistics --- 00:13:17.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.549 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:17.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:17.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:13:17.549 00:13:17.549 --- 10.0.0.1 ping statistics --- 00:13:17.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.549 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1051987 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1051987 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1051987 ']' 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:17.549 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.807 [2024-07-13 15:24:48.348517] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
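Up to this point nvmftestinit has built the TCP test network that the two ping checks verify: the first discovered ice port (cvl_0_0) is moved into a private network namespace for the target, while the second (cvl_0_1) stays in the root namespace for the initiator. Condensed from the commands in the trace (interface names are simply what the PCI scan found on this host):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator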
00:13:17.807 [2024-07-13 15:24:48.348602] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.807 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.807 [2024-07-13 15:24:48.386395] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:17.807 [2024-07-13 15:24:48.418540] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:17.807 [2024-07-13 15:24:48.508379] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.807 [2024-07-13 15:24:48.508446] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.807 [2024-07-13 15:24:48.508463] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.807 [2024-07-13 15:24:48.508477] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.807 [2024-07-13 15:24:48.508496] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:17.807 [2024-07-13 15:24:48.508580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.807 [2024-07-13 15:24:48.508705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.807 [2024-07-13 15:24:48.508708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.065 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:18.065 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:13:18.065 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:18.065 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.066 [2024-07-13 15:24:48.655463] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.066 [2024-07-13 15:24:48.685018] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.066 NULL1 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1052016 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
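For the connect_stress test the target is prepared with the RPCs traced just above: a TCP transport, the cnode1 subsystem (allow-any-host, serial SPDK00000000000001, -m 10), a listener on 10.0.0.2:4420 and a null bdev NULL1, after which the connect_stress binary is launched for 10 seconds while the surrounding seq 1 20 / cat loop appears to assemble the rpc.txt used during the run. Restated as plain rpc.py calls (rpc_cmd in the harness dispatches the same RPCs to the running target), a sketch of this setup is:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" bdev_null_create NULL1 1000 512
# connect/disconnect stress against the listener for 10 seconds (-t 10), as launched in the log
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
    -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -t 10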
00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:18.066 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.066 15:24:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.324 15:24:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.324 15:24:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:18.324 15:24:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.324 15:24:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.324 15:24:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.890 15:24:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.890 15:24:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:18.890 15:24:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.890 15:24:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.890 15:24:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.149 
15:24:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.149 15:24:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:19.149 15:24:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.149 15:24:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.149 15:24:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.466 15:24:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.466 15:24:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:19.466 15:24:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.466 15:24:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.466 15:24:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.724 15:24:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.724 15:24:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:19.724 15:24:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.724 15:24:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.724 15:24:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.981 15:24:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.981 15:24:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:19.981 15:24:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.981 15:24:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.981 15:24:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.239 15:24:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.239 15:24:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:20.239 15:24:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.239 15:24:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.239 15:24:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.804 15:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.804 15:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:20.804 15:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.804 15:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.804 15:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.061 15:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.061 15:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:21.061 15:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.061 15:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.061 15:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.319 15:24:51 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.319 15:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:21.319 15:24:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.319 15:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.319 15:24:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.576 15:24:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.576 15:24:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:21.576 15:24:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.576 15:24:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.576 15:24:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.834 15:24:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.834 15:24:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:21.834 15:24:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.834 15:24:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.834 15:24:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.397 15:24:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.397 15:24:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:22.397 15:24:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.397 15:24:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.397 15:24:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.654 15:24:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.654 15:24:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:22.654 15:24:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.654 15:24:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.654 15:24:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.912 15:24:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.912 15:24:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:22.912 15:24:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.912 15:24:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.912 15:24:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.170 15:24:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.170 15:24:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:23.170 15:24:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.170 15:24:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.170 15:24:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.735 15:24:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.735 
15:24:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:23.735 15:24:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.735 15:24:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.735 15:24:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.992 15:24:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.992 15:24:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:23.992 15:24:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.992 15:24:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.992 15:24:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.248 15:24:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.248 15:24:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:24.248 15:24:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.248 15:24:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.248 15:24:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.506 15:24:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.506 15:24:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:24.506 15:24:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.506 15:24:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.506 15:24:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.764 15:24:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.764 15:24:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:24.764 15:24:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.764 15:24:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.764 15:24:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.329 15:24:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.329 15:24:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:25.329 15:24:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.329 15:24:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.329 15:24:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.587 15:24:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.587 15:24:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:25.587 15:24:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.587 15:24:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.587 15:24:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.844 15:24:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.844 15:24:56 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1052016 00:13:25.844 15:24:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.844 15:24:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.844 15:24:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.101 15:24:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.101 15:24:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:26.101 15:24:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.101 15:24:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.101 15:24:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.360 15:24:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.360 15:24:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:26.360 15:24:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.360 15:24:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.360 15:24:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.924 15:24:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.924 15:24:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:26.924 15:24:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.924 15:24:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.924 15:24:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.182 15:24:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.182 15:24:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:27.182 15:24:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.182 15:24:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.182 15:24:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.439 15:24:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.439 15:24:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:27.439 15:24:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.439 15:24:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.439 15:24:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.697 15:24:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.697 15:24:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:27.697 15:24:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.697 15:24:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.697 15:24:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.953 15:24:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.953 15:24:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:27.953 
15:24:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.953 15:24:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.953 15:24:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.210 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1052016 00:13:28.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1052016) - No such process 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1052016 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:28.467 rmmod nvme_tcp 00:13:28.467 rmmod nvme_fabrics 00:13:28.467 rmmod nvme_keyring 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1051987 ']' 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1051987 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1051987 ']' 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1051987 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1051987 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1051987' 00:13:28.467 killing process with pid 1051987 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1051987 00:13:28.467 15:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1051987 00:13:28.725 15:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' 
== iso ']' 00:13:28.725 15:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:28.725 15:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:28.725 15:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:28.725 15:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:28.725 15:24:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.725 15:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.725 15:24:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.626 15:25:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:30.626 00:13:30.626 real 0m15.263s 00:13:30.626 user 0m38.122s 00:13:30.626 sys 0m5.962s 00:13:30.626 15:25:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:30.626 15:25:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.626 ************************************ 00:13:30.626 END TEST nvmf_connect_stress 00:13:30.626 ************************************ 00:13:30.884 15:25:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:30.884 15:25:01 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:30.884 15:25:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:30.884 15:25:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.884 15:25:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:30.884 ************************************ 00:13:30.884 START TEST nvmf_fused_ordering 00:13:30.884 ************************************ 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:30.884 * Looking for test storage... 
00:13:30.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.884 15:25:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:30.885 15:25:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.885 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:30.885 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:30.885 15:25:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:30.885 15:25:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:32.794 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:32.794 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:32.794 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:32.795 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:32.795 15:25:03 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:32.795 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:32.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:32.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:13:32.795 00:13:32.795 --- 10.0.0.2 ping statistics --- 00:13:32.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.795 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:32.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:32.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:13:32.795 00:13:32.795 --- 10.0.0.1 ping statistics --- 00:13:32.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.795 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1055228 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1055228 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1055228 ']' 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:32.795 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:33.053 [2024-07-13 15:25:03.564095] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
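The nvmf/common.sh entries above show how the harness turns the two detected ice ports into a self-contained NVMe/TCP test network: one port (cvl_0_0) is moved into a network namespace and addressed as the target at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits TCP traffic on port 4420, and both directions are ping-checked before the target application is started inside the namespace. A minimal standalone sketch of that sequence follows; the cvl_0_* interface names and the 10.0.0.0/24 addresses are simply the values recorded in this run and will differ on other hardware.

  # Sketch of the namespace-based test network set up by nvmf_tcp_init above.
  # Interface names and addresses are taken from this run, not general defaults.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                    # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> root namespace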
00:13:33.053 [2024-07-13 15:25:03.564171] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.053 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.054 [2024-07-13 15:25:03.601375] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:33.054 [2024-07-13 15:25:03.628558] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.054 [2024-07-13 15:25:03.719446] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.054 [2024-07-13 15:25:03.719512] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.054 [2024-07-13 15:25:03.719529] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.054 [2024-07-13 15:25:03.719543] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.054 [2024-07-13 15:25:03.719555] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:33.054 [2024-07-13 15:25:03.719584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:33.320 [2024-07-13 15:25:03.867532] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:33.320 [2024-07-13 15:25:03.883705] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:33.320 NULL1 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.320 15:25:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:33.321 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.321 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:33.321 15:25:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.321 15:25:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:33.321 [2024-07-13 15:25:03.928896] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:13:33.321 [2024-07-13 15:25:03.928952] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1055303 ] 00:13:33.321 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.321 [2024-07-13 15:25:03.961639] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
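For reference, the rpc_cmd calls recorded above configure the target that the fused_ordering initiator then exercises: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1, add a listener on 10.0.0.2:4420, create a 1000 MiB null bdev with 512-byte blocks, and attach it as namespace 1. A rough standalone sketch of the same sequence is below, issued with scripts/rpc.py (rpc_cmd in the log is a thin wrapper around that script); $SPDK_DIR is a placeholder for the spdk checkout used in this run, and every other value is taken verbatim from the xtrace.

  # Target-side setup as recorded above, replayed via scripts/rpc.py.
  $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $SPDK_DIR/scripts/rpc.py bdev_null_create NULL1 1000 512        # 1000 MiB null bdev, 512-byte blocks
  $SPDK_DIR/scripts/rpc.py bdev_wait_for_examine
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # Initiator side: point the fused_ordering tool at the new subsystem.
  $SPDK_DIR/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Once the namespace is attached, the tool connects, reports the 1 GB namespace it found, and emits the numbered fused_ordering(N) progress lines seen below.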
00:13:33.899 Attached to nqn.2016-06.io.spdk:cnode1 00:13:33.899 Namespace ID: 1 size: 1GB 00:13:33.899 fused_ordering(0) 00:13:33.899 fused_ordering(1) 00:13:33.899 fused_ordering(2) 00:13:33.899 fused_ordering(3) 00:13:33.899 fused_ordering(4) 00:13:33.899 fused_ordering(5) 00:13:33.899 fused_ordering(6) 00:13:33.899 fused_ordering(7) 00:13:33.899 fused_ordering(8) 00:13:33.899 fused_ordering(9) 00:13:33.899 fused_ordering(10) 00:13:33.899 fused_ordering(11) 00:13:33.899 fused_ordering(12) 00:13:33.899 fused_ordering(13) 00:13:33.899 fused_ordering(14) 00:13:33.899 fused_ordering(15) 00:13:33.899 fused_ordering(16) 00:13:33.899 fused_ordering(17) 00:13:33.899 fused_ordering(18) 00:13:33.899 fused_ordering(19) 00:13:33.899 fused_ordering(20) 00:13:33.899 fused_ordering(21) 00:13:33.899 fused_ordering(22) 00:13:33.899 fused_ordering(23) 00:13:33.899 fused_ordering(24) 00:13:33.899 fused_ordering(25) 00:13:33.899 fused_ordering(26) 00:13:33.899 fused_ordering(27) 00:13:33.899 fused_ordering(28) 00:13:33.899 fused_ordering(29) 00:13:33.899 fused_ordering(30) 00:13:33.899 fused_ordering(31) 00:13:33.899 fused_ordering(32) 00:13:33.899 fused_ordering(33) 00:13:33.899 fused_ordering(34) 00:13:33.899 fused_ordering(35) 00:13:33.899 fused_ordering(36) 00:13:33.899 fused_ordering(37) 00:13:33.899 fused_ordering(38) 00:13:33.899 fused_ordering(39) 00:13:33.899 fused_ordering(40) 00:13:33.899 fused_ordering(41) 00:13:33.899 fused_ordering(42) 00:13:33.899 fused_ordering(43) 00:13:33.899 fused_ordering(44) 00:13:33.899 fused_ordering(45) 00:13:33.899 fused_ordering(46) 00:13:33.899 fused_ordering(47) 00:13:33.899 fused_ordering(48) 00:13:33.899 fused_ordering(49) 00:13:33.899 fused_ordering(50) 00:13:33.899 fused_ordering(51) 00:13:33.899 fused_ordering(52) 00:13:33.899 fused_ordering(53) 00:13:33.899 fused_ordering(54) 00:13:33.899 fused_ordering(55) 00:13:33.899 fused_ordering(56) 00:13:33.899 fused_ordering(57) 00:13:33.899 fused_ordering(58) 00:13:33.899 fused_ordering(59) 00:13:33.899 fused_ordering(60) 00:13:33.899 fused_ordering(61) 00:13:33.899 fused_ordering(62) 00:13:33.899 fused_ordering(63) 00:13:33.899 fused_ordering(64) 00:13:33.899 fused_ordering(65) 00:13:33.899 fused_ordering(66) 00:13:33.899 fused_ordering(67) 00:13:33.899 fused_ordering(68) 00:13:33.899 fused_ordering(69) 00:13:33.899 fused_ordering(70) 00:13:33.899 fused_ordering(71) 00:13:33.899 fused_ordering(72) 00:13:33.899 fused_ordering(73) 00:13:33.899 fused_ordering(74) 00:13:33.899 fused_ordering(75) 00:13:33.899 fused_ordering(76) 00:13:33.899 fused_ordering(77) 00:13:33.899 fused_ordering(78) 00:13:33.899 fused_ordering(79) 00:13:33.899 fused_ordering(80) 00:13:33.899 fused_ordering(81) 00:13:33.899 fused_ordering(82) 00:13:33.899 fused_ordering(83) 00:13:33.899 fused_ordering(84) 00:13:33.899 fused_ordering(85) 00:13:33.899 fused_ordering(86) 00:13:33.899 fused_ordering(87) 00:13:33.899 fused_ordering(88) 00:13:33.899 fused_ordering(89) 00:13:33.899 fused_ordering(90) 00:13:33.899 fused_ordering(91) 00:13:33.899 fused_ordering(92) 00:13:33.899 fused_ordering(93) 00:13:33.899 fused_ordering(94) 00:13:33.899 fused_ordering(95) 00:13:33.899 fused_ordering(96) 00:13:33.899 fused_ordering(97) 00:13:33.899 fused_ordering(98) 00:13:33.899 fused_ordering(99) 00:13:33.899 fused_ordering(100) 00:13:33.899 fused_ordering(101) 00:13:33.899 fused_ordering(102) 00:13:33.899 fused_ordering(103) 00:13:33.899 fused_ordering(104) 00:13:33.899 fused_ordering(105) 00:13:33.899 fused_ordering(106) 00:13:33.899 fused_ordering(107) 
00:13:33.899 fused_ordering(108) ... 00:13:36.225 fused_ordering(967)
00:13:36.225 fused_ordering(968) 00:13:36.225 fused_ordering(969) 00:13:36.225 fused_ordering(970) 00:13:36.225 fused_ordering(971) 00:13:36.225 fused_ordering(972) 00:13:36.225 fused_ordering(973) 00:13:36.225 fused_ordering(974) 00:13:36.225 fused_ordering(975) 00:13:36.225 fused_ordering(976) 00:13:36.225 fused_ordering(977) 00:13:36.225 fused_ordering(978) 00:13:36.225 fused_ordering(979) 00:13:36.225 fused_ordering(980) 00:13:36.225 fused_ordering(981) 00:13:36.225 fused_ordering(982) 00:13:36.225 fused_ordering(983) 00:13:36.225 fused_ordering(984) 00:13:36.225 fused_ordering(985) 00:13:36.225 fused_ordering(986) 00:13:36.225 fused_ordering(987) 00:13:36.225 fused_ordering(988) 00:13:36.225 fused_ordering(989) 00:13:36.225 fused_ordering(990) 00:13:36.225 fused_ordering(991) 00:13:36.225 fused_ordering(992) 00:13:36.225 fused_ordering(993) 00:13:36.225 fused_ordering(994) 00:13:36.225 fused_ordering(995) 00:13:36.225 fused_ordering(996) 00:13:36.225 fused_ordering(997) 00:13:36.225 fused_ordering(998) 00:13:36.225 fused_ordering(999) 00:13:36.225 fused_ordering(1000) 00:13:36.225 fused_ordering(1001) 00:13:36.225 fused_ordering(1002) 00:13:36.225 fused_ordering(1003) 00:13:36.225 fused_ordering(1004) 00:13:36.225 fused_ordering(1005) 00:13:36.225 fused_ordering(1006) 00:13:36.225 fused_ordering(1007) 00:13:36.225 fused_ordering(1008) 00:13:36.225 fused_ordering(1009) 00:13:36.225 fused_ordering(1010) 00:13:36.225 fused_ordering(1011) 00:13:36.225 fused_ordering(1012) 00:13:36.225 fused_ordering(1013) 00:13:36.225 fused_ordering(1014) 00:13:36.226 fused_ordering(1015) 00:13:36.226 fused_ordering(1016) 00:13:36.226 fused_ordering(1017) 00:13:36.226 fused_ordering(1018) 00:13:36.226 fused_ordering(1019) 00:13:36.226 fused_ordering(1020) 00:13:36.226 fused_ordering(1021) 00:13:36.226 fused_ordering(1022) 00:13:36.226 fused_ordering(1023) 00:13:36.226 15:25:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:36.226 15:25:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:36.226 15:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:36.226 15:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:36.226 15:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:36.226 15:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:36.226 15:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:36.226 15:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:36.226 rmmod nvme_tcp 00:13:36.226 rmmod nvme_fabrics 00:13:36.226 rmmod nvme_keyring 00:13:36.226 15:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:36.226 15:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:36.226 15:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:13:36.226 15:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1055228 ']' 00:13:36.226 15:25:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1055228 00:13:36.226 15:25:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1055228 ']' 00:13:36.226 15:25:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1055228 00:13:36.226 15:25:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:13:36.226 15:25:06 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:36.226 15:25:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1055228 00:13:36.226 15:25:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:36.226 15:25:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:36.226 15:25:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1055228' 00:13:36.226 killing process with pid 1055228 00:13:36.226 15:25:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1055228 00:13:36.226 15:25:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1055228 00:13:36.484 15:25:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:36.484 15:25:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:36.484 15:25:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:36.484 15:25:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:36.484 15:25:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:36.484 15:25:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.484 15:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.484 15:25:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.386 15:25:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:38.386 00:13:38.386 real 0m7.711s 00:13:38.386 user 0m5.283s 00:13:38.386 sys 0m3.542s 00:13:38.386 15:25:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:38.386 15:25:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:38.386 ************************************ 00:13:38.386 END TEST nvmf_fused_ordering 00:13:38.386 ************************************ 00:13:38.645 15:25:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:38.645 15:25:09 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:38.645 15:25:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:38.645 15:25:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:38.645 15:25:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:38.645 ************************************ 00:13:38.645 START TEST nvmf_delete_subsystem 00:13:38.645 ************************************ 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:38.645 * Looking for test storage... 
00:13:38.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:13:38.645 15:25:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:40.548 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:40.548 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:40.548 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:40.548 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:40.549 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:40.549 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:40.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:40.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:13:40.807 00:13:40.807 --- 10.0.0.2 ping statistics --- 00:13:40.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.807 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:40.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:40.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:13:40.807 00:13:40.807 --- 10.0.0.1 ping statistics --- 00:13:40.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.807 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1057549 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1057549 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1057549 ']' 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
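The trace above is the harness's standard NVMe/TCP test-network bring-up: one port (cvl_0_0) is moved into a private network namespace to act as the target side, the other (cvl_0_1) stays in the host namespace as the initiator, and a ping in each direction verifies the link before the target application is started. Condensed into plain shell, with every interface name and address taken from the trace itself (only the comments are added), the sequence is roughly:

ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the first port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (host namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside the namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic (port 4420) in
ping -c 1 10.0.0.2                                                  # host -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host reachability check

Putting the target port in its own namespace is what lets a single node drive real NIC-to-NIC TCP traffic against itself; accordingly, the nvmf_tgt launched above is prefixed with 'ip netns exec cvl_0_0_ns_spdk', as the trace shows.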
00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:40.807 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:40.807 [2024-07-13 15:25:11.470924] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:13:40.807 [2024-07-13 15:25:11.470999] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.807 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.807 [2024-07-13 15:25:11.509076] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:40.807 [2024-07-13 15:25:11.539694] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:41.065 [2024-07-13 15:25:11.629978] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.065 [2024-07-13 15:25:11.630044] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.065 [2024-07-13 15:25:11.630070] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.065 [2024-07-13 15:25:11.630100] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.065 [2024-07-13 15:25:11.630121] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:41.065 [2024-07-13 15:25:11.630201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.065 [2024-07-13 15:25:11.630210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.065 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:41.065 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:13:41.065 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:41.065 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:41.066 [2024-07-13 15:25:11.776136] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:41.066 [2024-07-13 15:25:11.792377] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:41.066 NULL1 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:41.066 Delay0 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1057643 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:41.066 15:25:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:41.324 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.324 [2024-07-13 15:25:11.867100] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
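Everything the delete_subsystem test needs is then created over RPC against the freshly started target, and only after that is a client workload pointed at it. A condensed sketch of that sequence, with every command, name, and parameter copied from the trace (rpc_cmd is the harness's RPC helper seen in the trace; the only assumptions are the trailing '&' and the 'perf_pid=$!' capture, which the perf_pid=1057643 recorded above implies):

# Target side: TCP transport, subsystem cnode1, listener, and a deliberately slow namespace
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512            # 1000 MB null bdev, 512-byte blocks
rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # roughly 1 s injected latency
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Initiator side: start perf against the listener, then delete the subsystem mid-run
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The Delay0 bdev adds on the order of a second to every read and write, so many of the 128 queued I/Os are still outstanding when the subsystem is deleted two seconds into the five-second run. That is the situation the trace below records: the in-flight commands complete with errors ('Read/Write completed with error (sct=0, sc=8)', 'starting I/O failed: -6') rather than hanging.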
00:13:43.222 15:25:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:43.222 15:25:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.222 15:25:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 starting I/O failed: -6 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 starting I/O failed: -6 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 starting I/O failed: -6 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 starting I/O failed: -6 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 starting I/O failed: -6 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 starting I/O failed: -6 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 starting I/O failed: -6 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 starting I/O failed: -6 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 starting I/O failed: -6 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 starting I/O failed: -6 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 starting I/O failed: -6 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 [2024-07-13 15:25:13.997764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f08cc00cfe0 is same with the state(5) to be set 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 
Read completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Read completed with error (sct=0, sc=8) 00:13:43.480 Write completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 starting I/O failed: -6 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:43.481 starting I/O failed: -6 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:43.481 starting I/O failed: -6 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 starting I/O failed: -6 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 
00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 starting I/O failed: -6 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 starting I/O failed: -6 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:43.481 starting I/O failed: -6 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 starting I/O failed: -6 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 starting I/O failed: -6 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 starting I/O failed: -6 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 starting I/O failed: -6 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 [2024-07-13 15:25:13.998792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dad300 is same with the state(5) to be set 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error 
(sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Read completed with error (sct=0, sc=8) 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:43.481 Write completed with error (sct=0, sc=8) 00:13:44.415 [2024-07-13 15:25:14.963969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc4b40 is same with the state(5) to be set 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Write completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Write completed with error (sct=0, sc=8) 00:13:44.415 Write completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Write completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Write completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 [2024-07-13 15:25:14.999821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da7100 is same with the state(5) to be set 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Write completed with error (sct=0, sc=8) 00:13:44.415 Write completed with error (sct=0, sc=8) 00:13:44.415 Write completed with error (sct=0, sc=8) 00:13:44.415 Write completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Write completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Write completed with error (sct=0, sc=8) 
00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Write completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Write completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Write completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Write completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 [2024-07-13 15:25:15.000244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da6d40 is same with the state(5) to be set 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Write completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Write completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Write completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 [2024-07-13 15:25:15.001988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f08cc000c00 is same with the state(5) to be set 00:13:44.415 Write completed with error (sct=0, sc=8) 00:13:44.415 Write completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Write completed with error (sct=0, sc=8) 00:13:44.415 Write completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Write completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 Read completed with error (sct=0, sc=8) 00:13:44.415 [2024-07-13 15:25:15.002633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f08cc00d2f0 is same with the state(5) to be set 00:13:44.415 Initializing NVMe Controllers 00:13:44.415 
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:44.415 Controller IO queue size 128, less than required. 00:13:44.415 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:44.415 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:44.415 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:44.415 Initialization complete. Launching workers. 00:13:44.415 ======================================================== 00:13:44.415 Latency(us) 00:13:44.415 Device Information : IOPS MiB/s Average min max 00:13:44.415 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.30 0.08 950508.68 402.32 2001080.23 00:13:44.415 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 165.28 0.08 906397.80 451.10 1011185.80 00:13:44.415 ======================================================== 00:13:44.415 Total : 327.58 0.16 928252.73 402.32 2001080.23 00:13:44.415 00:13:44.415 [2024-07-13 15:25:15.003103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc4b40 (9): Bad file descriptor 00:13:44.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:44.415 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.415 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:13:44.415 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1057643 00:13:44.415 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:44.981 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:44.981 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1057643 00:13:44.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1057643) - No such process 00:13:44.981 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1057643 00:13:44.981 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:13:44.981 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1057643 00:13:44.981 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:13:44.981 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:44.981 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:13:44.981 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:44.981 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1057643 00:13:44.982 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:13:44.982 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:44.982 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:44.982 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:44.982 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:44.982 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.982 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:44.982 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.982 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.982 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.982 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:44.982 [2024-07-13 15:25:15.526247] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.982 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.982 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.982 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.982 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:44.982 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.982 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1058049 00:13:44.982 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:13:44.982 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:44.982 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1058049 00:13:44.982 15:25:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:44.982 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.982 [2024-07-13 15:25:15.588442] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
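Here the subsystem has just been recreated, this time with -m 10 (at most ten namespaces), the TCP listener and the Delay0 namespace have been re-added, and a shorter 3-second perf run has been started as PID 1058049. The script then simply polls that PID until the run finishes on its own, which is what the repeated kill -0 / sleep 0.5 lines below are. A rough reconstruction of that polling loop, inferred from the xtrace line numbers (delete_subsystem.sh lines 56-60); the real loop body and its failure handling may differ in detail:

    delay=0
    while kill -0 "$perf_pid"; do      # true while spdk_nvme_perf is still running
        sleep 0.5
        (( delay++ > 20 )) && exit 1   # illustrative bail-out after ~10 s of polling
    done

Unlike the first run, this one is expected to complete cleanly: perf exits by itself, kill -0 then fails with the "(1058049) - No such process" message seen below, and the script moves on to wait on the PID and tear the target down.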
00:13:45.547 15:25:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:45.547 15:25:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1058049 00:13:45.547 15:25:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:45.804 15:25:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:45.804 15:25:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1058049 00:13:45.804 15:25:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:46.369 15:25:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:46.369 15:25:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1058049 00:13:46.369 15:25:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:46.953 15:25:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:46.953 15:25:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1058049 00:13:46.953 15:25:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:47.516 15:25:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:47.516 15:25:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1058049 00:13:47.516 15:25:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:48.080 15:25:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:48.080 15:25:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1058049 00:13:48.080 15:25:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:48.080 Initializing NVMe Controllers 00:13:48.080 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:48.080 Controller IO queue size 128, less than required. 00:13:48.080 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:48.080 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:48.080 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:48.080 Initialization complete. Launching workers. 
00:13:48.080 ======================================================== 00:13:48.080 Latency(us) 00:13:48.080 Device Information : IOPS MiB/s Average min max 00:13:48.080 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003895.01 1000202.45 1012876.08 00:13:48.080 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005243.80 1000364.09 1012856.76 00:13:48.080 ======================================================== 00:13:48.080 Total : 256.00 0.12 1004569.40 1000202.45 1012876.08 00:13:48.080 00:13:48.338 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:48.338 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1058049 00:13:48.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1058049) - No such process 00:13:48.338 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1058049 00:13:48.338 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:48.338 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:48.338 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:48.338 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:13:48.338 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:48.338 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:13:48.338 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:48.338 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:48.338 rmmod nvme_tcp 00:13:48.338 rmmod nvme_fabrics 00:13:48.656 rmmod nvme_keyring 00:13:48.656 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:48.656 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:13:48.656 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:13:48.656 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1057549 ']' 00:13:48.656 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1057549 00:13:48.656 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1057549 ']' 00:13:48.656 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1057549 00:13:48.656 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:13:48.656 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:48.656 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1057549 00:13:48.656 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:48.656 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:48.656 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1057549' 00:13:48.656 killing process with pid 1057549 00:13:48.656 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1057549 00:13:48.656 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
1057549 00:13:48.943 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:48.943 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:48.943 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:48.943 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:48.943 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:48.943 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.943 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.943 15:25:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.851 15:25:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:50.851 00:13:50.851 real 0m12.224s 00:13:50.851 user 0m27.578s 00:13:50.851 sys 0m3.039s 00:13:50.851 15:25:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:50.851 15:25:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:50.851 ************************************ 00:13:50.851 END TEST nvmf_delete_subsystem 00:13:50.851 ************************************ 00:13:50.851 15:25:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:50.851 15:25:21 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:50.851 15:25:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:50.851 15:25:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:50.851 15:25:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:50.851 ************************************ 00:13:50.851 START TEST nvmf_ns_masking 00:13:50.851 ************************************ 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:50.851 * Looking for test storage... 
00:13:50.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.851 15:25:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=009da743-f4ea-482c-b584-d272c52aff2a 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=4fc4af17-eb8b-4f31-b03c-fd852ee7a9c4 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=7489d873-6c10-427e-9ccf-279a6be976d4 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:13:50.852 15:25:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:52.757 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:52.757 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:52.757 
15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:52.757 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:52.757 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:52.757 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:53.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:53.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:13:53.015 00:13:53.015 --- 10.0.0.2 ping statistics --- 00:13:53.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.015 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:53.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:53.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:13:53.015 00:13:53.015 --- 10.0.0.1 ping statistics --- 00:13:53.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.015 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1060391 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1060391 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1060391 ']' 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:53.015 15:25:23 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:53.015 15:25:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:53.015 [2024-07-13 15:25:23.682559] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:13:53.015 [2024-07-13 15:25:23.682642] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.015 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.015 [2024-07-13 15:25:23.720158] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:53.015 [2024-07-13 15:25:23.750216] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.272 [2024-07-13 15:25:23.840937] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.272 [2024-07-13 15:25:23.840995] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.272 [2024-07-13 15:25:23.841021] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.272 [2024-07-13 15:25:23.841042] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.272 [2024-07-13 15:25:23.841061] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:53.273 [2024-07-13 15:25:23.841110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.273 15:25:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:53.273 15:25:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:13:53.273 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:53.273 15:25:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:53.273 15:25:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:53.273 15:25:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.273 15:25:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:53.530 [2024-07-13 15:25:24.206140] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.530 15:25:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:53.530 15:25:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:53.530 15:25:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:53.787 Malloc1 00:13:53.787 15:25:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:54.352 Malloc2 00:13:54.352 15:25:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:54.610 15:25:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:54.868 15:25:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:54.868 [2024-07-13 15:25:25.620852] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.125 15:25:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:55.125 15:25:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7489d873-6c10-427e-9ccf-279a6be976d4 -a 10.0.0.2 -s 4420 -i 4 00:13:55.125 15:25:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:55.125 15:25:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:55.125 15:25:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:55.125 15:25:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:55.125 15:25:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:57.651 15:25:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:57.651 15:25:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:57.651 15:25:27 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:57.651 15:25:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:57.651 15:25:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:57.651 15:25:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:57.651 15:25:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:57.651 15:25:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:57.651 15:25:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:57.651 15:25:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:57.651 15:25:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:57.651 15:25:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:57.651 15:25:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:57.651 [ 0]:0x1 00:13:57.651 15:25:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:57.651 15:25:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:57.651 15:25:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=eeb4e24c63fb4102bd7cc23e941afe86 00:13:57.652 15:25:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ eeb4e24c63fb4102bd7cc23e941afe86 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:57.652 15:25:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:57.652 15:25:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:57.652 15:25:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:57.652 15:25:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:57.652 [ 0]:0x1 00:13:57.652 15:25:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:57.652 15:25:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:57.652 15:25:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=eeb4e24c63fb4102bd7cc23e941afe86 00:13:57.652 15:25:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ eeb4e24c63fb4102bd7cc23e941afe86 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:57.652 15:25:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:57.652 15:25:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:57.652 15:25:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:57.652 [ 1]:0x2 00:13:57.652 15:25:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:57.652 15:25:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:57.652 15:25:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=57a01d94cc814c36bb40628a446ccb49 00:13:57.652 15:25:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 57a01d94cc814c36bb40628a446ccb49 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:57.652 15:25:28 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:57.652 15:25:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:57.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.652 15:25:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.910 15:25:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:58.168 15:25:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:58.168 15:25:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7489d873-6c10-427e-9ccf-279a6be976d4 -a 10.0.0.2 -s 4420 -i 4 00:13:58.426 15:25:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:58.426 15:25:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:58.426 15:25:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:58.426 15:25:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:13:58.426 15:25:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:13:58.426 15:25:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:00.322 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:00.322 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:00.322 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:00.322 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:00.322 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:00.323 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:00.323 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:00.323 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:00.580 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:00.580 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:00.580 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:00.580 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:00.580 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:00.580 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:00.580 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:00.580 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:00.580 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:00.580 15:25:31 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:00.580 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:00.580 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:00.581 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:00.581 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:00.581 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:00.581 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:00.581 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:00.581 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:00.581 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:00.581 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:00.581 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:00.581 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:00.581 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:00.581 [ 0]:0x2 00:14:00.581 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:00.581 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:00.581 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=57a01d94cc814c36bb40628a446ccb49 00:14:00.581 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 57a01d94cc814c36bb40628a446ccb49 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:00.581 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:00.838 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:00.838 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:00.838 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:00.838 [ 0]:0x1 00:14:00.838 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:00.838 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:01.095 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=eeb4e24c63fb4102bd7cc23e941afe86 00:14:01.095 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ eeb4e24c63fb4102bd7cc23e941afe86 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.095 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:01.095 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:01.095 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:01.095 [ 1]:0x2 00:14:01.095 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:01.095 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:01.095 15:25:31 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=57a01d94cc814c36bb40628a446ccb49 00:14:01.095 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 57a01d94cc814c36bb40628a446ccb49 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.095 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:01.353 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:01.353 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:01.353 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:01.353 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:01.353 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:01.353 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:01.353 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:01.353 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:01.353 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:01.353 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:01.353 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:01.353 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:01.353 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:01.353 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.353 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:01.353 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:01.353 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:01.353 15:25:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:01.353 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:01.353 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:01.353 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:01.353 [ 0]:0x2 00:14:01.353 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:01.353 15:25:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:01.353 15:25:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=57a01d94cc814c36bb40628a446ccb49 00:14:01.353 15:25:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 57a01d94cc814c36bb40628a446ccb49 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.353 15:25:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:01.353 15:25:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:01.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:14:01.353 15:25:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:01.610 15:25:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:01.610 15:25:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7489d873-6c10-427e-9ccf-279a6be976d4 -a 10.0.0.2 -s 4420 -i 4 00:14:01.868 15:25:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:01.868 15:25:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:01.868 15:25:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:01.868 15:25:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:01.868 15:25:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:01.868 15:25:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:04.391 [ 0]:0x1 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=eeb4e24c63fb4102bd7cc23e941afe86 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ eeb4e24c63fb4102bd7cc23e941afe86 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:04.391 [ 1]:0x2 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:04.391 
15:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=57a01d94cc814c36bb40628a446ccb49 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 57a01d94cc814c36bb40628a446ccb49 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:04.391 15:25:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:04.649 [ 0]:0x2 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=57a01d94cc814c36bb40628a446ccb49 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 57a01d94cc814c36bb40628a446ccb49 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host 
nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:04.649 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:04.907 [2024-07-13 15:25:35.514421] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:04.907 request: 00:14:04.907 { 00:14:04.907 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.907 "nsid": 2, 00:14:04.907 "host": "nqn.2016-06.io.spdk:host1", 00:14:04.907 "method": "nvmf_ns_remove_host", 00:14:04.907 "req_id": 1 00:14:04.907 } 00:14:04.907 Got JSON-RPC error response 00:14:04.907 response: 00:14:04.907 { 00:14:04.907 "code": -32602, 00:14:04.907 "message": "Invalid parameters" 00:14:04.907 } 00:14:04.907 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:04.907 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:04.907 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:04.907 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:04.907 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:04.907 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:04.908 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:04.908 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:04.908 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.908 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:04.908 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.908 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # 
ns_is_visible 0x1 00:14:04.908 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:04.908 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:04.908 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:04.908 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:04.908 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:04.908 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:04.908 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:04.908 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:04.908 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:04.908 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:04.908 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:04.908 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:04.908 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:04.908 [ 0]:0x2 00:14:04.908 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:04.908 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:05.185 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=57a01d94cc814c36bb40628a446ccb49 00:14:05.185 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 57a01d94cc814c36bb40628a446ccb49 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:05.185 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:05.185 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:05.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.185 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1062003 00:14:05.185 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:05.185 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.185 15:25:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1062003 /var/tmp/host.sock 00:14:05.185 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1062003 ']' 00:14:05.185 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:05.185 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:05.185 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:05.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
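The masking checks above boil down to a small set of target-side RPCs plus initiator-side nvme-cli probes. A condensed sketch of that flow, with the long rpc.py path shortened to scripts/rpc.py and the NQNs, address and nsids taken from the trace:

    # a namespace added with --no-auto-visible starts out hidden from every host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # grant one host NQN access, probe from the initiator, then revoke it again
    scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    nvme list-ns /dev/nvme0                                 # expect nsid 0x1 while access is granted
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid     # an all-zero NGUID means the namespace is hidden
    scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # nvmf_ns_remove_host against the auto-visible namespace (nsid 2) is rejected with
    # JSON-RPC -32602 "Invalid parameters", which is the error captured above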
00:14:05.185 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:05.185 15:25:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:05.185 [2024-07-13 15:25:35.847541] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:14:05.185 [2024-07-13 15:25:35.847623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1062003 ] 00:14:05.185 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.185 [2024-07-13 15:25:35.879432] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:05.185 [2024-07-13 15:25:35.911826] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.443 [2024-07-13 15:25:36.005396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.701 15:25:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:05.701 15:25:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:05.701 15:25:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.959 15:25:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:06.216 15:25:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 009da743-f4ea-482c-b584-d272c52aff2a 00:14:06.216 15:25:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:06.216 15:25:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 009DA743F4EA482CB584D272C52AFF2A -i 00:14:06.474 15:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 4fc4af17-eb8b-4f31-b03c-fd852ee7a9c4 00:14:06.474 15:25:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:06.474 15:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 4FC4AF17EB8B4F31B03CFD852EE7A9C4 -i 00:14:06.732 15:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:06.989 15:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:07.247 15:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:07.247 15:25:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:07.812 nvme0n1 
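The second half of the test gives each namespace a fixed NGUID and a single allowed host, then verifies the result from a second spdk_tgt instance that plays the NVMe-oF host and answers RPCs on /var/tmp/host.sock. A minimal sketch of the host-side attach, with the rpc.py path shortened and all other values as in the trace:

    # one controller per host NQN; each host may only see the namespace it was granted
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0   # exposes nvme0n1
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1   # exposes nvme1n2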
00:14:07.812 15:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:07.812 15:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:08.070 nvme1n2 00:14:08.070 15:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:08.070 15:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:08.070 15:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:08.070 15:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:08.070 15:25:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:08.329 15:25:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:08.329 15:25:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:08.329 15:25:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:08.329 15:25:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:08.591 15:25:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 009da743-f4ea-482c-b584-d272c52aff2a == \0\0\9\d\a\7\4\3\-\f\4\e\a\-\4\8\2\c\-\b\5\8\4\-\d\2\7\2\c\5\2\a\f\f\2\a ]] 00:14:08.591 15:25:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:08.591 15:25:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:08.591 15:25:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:08.849 15:25:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 4fc4af17-eb8b-4f31-b03c-fd852ee7a9c4 == \4\f\c\4\a\f\1\7\-\e\b\8\b\-\4\f\3\1\-\b\0\3\c\-\f\d\8\5\2\e\e\7\a\9\c\4 ]] 00:14:08.849 15:25:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1062003 00:14:08.849 15:25:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1062003 ']' 00:14:08.849 15:25:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1062003 00:14:08.849 15:25:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:14:08.849 15:25:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:08.849 15:25:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1062003 00:14:08.849 15:25:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:08.849 15:25:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:08.849 15:25:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1062003' 00:14:08.849 killing process with pid 1062003 00:14:08.849 15:25:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1062003 00:14:08.849 
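The verification that just ran lists the attached bdevs over the host app's RPC socket and compares their UUIDs with the NGUIDs the target namespaces were created with; roughly (rpc.py path shortened as above):

    scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs | jq -r '.[].name'              # nvme0n1 nvme1n2
    scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'   # 009da743-...
    scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 | jq -r '.[].uuid'   # 4fc4af17-...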
15:25:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1062003 00:14:09.412 15:25:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:09.412 15:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:09.412 15:25:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:09.412 15:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:09.412 15:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:09.412 15:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:09.412 15:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:09.412 15:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:09.412 15:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:09.412 rmmod nvme_tcp 00:14:09.669 rmmod nvme_fabrics 00:14:09.670 rmmod nvme_keyring 00:14:09.670 15:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:09.670 15:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:09.670 15:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:09.670 15:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1060391 ']' 00:14:09.670 15:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1060391 00:14:09.670 15:25:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1060391 ']' 00:14:09.670 15:25:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1060391 00:14:09.670 15:25:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:14:09.670 15:25:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:09.670 15:25:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1060391 00:14:09.670 15:25:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:09.670 15:25:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:09.670 15:25:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1060391' 00:14:09.670 killing process with pid 1060391 00:14:09.670 15:25:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1060391 00:14:09.670 15:25:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1060391 00:14:09.927 15:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:09.927 15:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:09.927 15:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:09.927 15:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:09.927 15:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:09.927 15:25:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.927 15:25:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.927 15:25:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.827 15:25:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip 
-4 addr flush cvl_0_1 00:14:11.827 00:14:11.827 real 0m21.111s 00:14:11.827 user 0m27.322s 00:14:11.827 sys 0m4.145s 00:14:11.827 15:25:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:11.827 15:25:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:11.827 ************************************ 00:14:11.827 END TEST nvmf_ns_masking 00:14:11.827 ************************************ 00:14:12.086 15:25:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:12.086 15:25:42 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:12.086 15:25:42 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:12.086 15:25:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:12.086 15:25:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:12.086 15:25:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:12.086 ************************************ 00:14:12.086 START TEST nvmf_nvme_cli 00:14:12.086 ************************************ 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:12.086 * Looking for test storage... 00:14:12.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:12.086 15:25:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:13.992 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:13.992 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:13.992 15:25:44 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:13.992 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:13.992 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:13.992 15:25:44 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:14.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:14.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:14:14.251 00:14:14.251 --- 10.0.0.2 ping statistics --- 00:14:14.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.251 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:14.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:14.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:14:14.251 00:14:14.251 --- 10.0.0.1 ping statistics --- 00:14:14.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.251 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1064497 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1064497 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1064497 ']' 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
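nvmftestinit here splits the two cvl ports between a fresh network namespace for the target and the default namespace for the initiator, so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) use separate interfaces. A rough sketch of that setup as performed above, with the nvmf_tgt path shortened to build/bin/nvmf_tgt:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator keeps 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # the target itself then runs inside the namespace
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF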
00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:14.251 15:25:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:14.251 [2024-07-13 15:25:44.885090] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:14:14.251 [2024-07-13 15:25:44.885186] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.251 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.251 [2024-07-13 15:25:44.932822] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:14.251 [2024-07-13 15:25:44.964284] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:14.509 [2024-07-13 15:25:45.064129] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.509 [2024-07-13 15:25:45.064198] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.509 [2024-07-13 15:25:45.064216] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.509 [2024-07-13 15:25:45.064229] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.509 [2024-07-13 15:25:45.064241] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.509 [2024-07-13 15:25:45.064299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.509 [2024-07-13 15:25:45.064352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.509 [2024-07-13 15:25:45.064387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:14.509 [2024-07-13 15:25:45.064389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.509 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:14.509 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:14:14.509 15:25:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:14.509 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:14.509 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:14.509 15:25:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.509 15:25:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:14.509 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.509 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:14.509 [2024-07-13 15:25:45.226798] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.509 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.509 15:25:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:14.509 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.509 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:14.509 Malloc0 00:14:14.509 15:25:45 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.509 15:25:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:14.509 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.509 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:14.769 Malloc1 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:14.769 [2024-07-13 15:25:45.313295] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:14:14.769 00:14:14.769 Discovery Log Number of Records 2, Generation counter 2 00:14:14.769 =====Discovery Log Entry 0====== 00:14:14.769 trtype: tcp 00:14:14.769 adrfam: ipv4 00:14:14.769 subtype: current discovery subsystem 00:14:14.769 treq: not required 00:14:14.769 portid: 0 00:14:14.769 trsvcid: 4420 00:14:14.769 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:14.769 traddr: 10.0.0.2 00:14:14.769 eflags: explicit discovery connections, duplicate discovery information 00:14:14.769 sectype: none 
00:14:14.769 =====Discovery Log Entry 1====== 00:14:14.769 trtype: tcp 00:14:14.769 adrfam: ipv4 00:14:14.769 subtype: nvme subsystem 00:14:14.769 treq: not required 00:14:14.769 portid: 0 00:14:14.769 trsvcid: 4420 00:14:14.769 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:14.769 traddr: 10.0.0.2 00:14:14.769 eflags: none 00:14:14.769 sectype: none 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:14.769 15:25:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:15.336 15:25:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:15.336 15:25:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:15.336 15:25:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:15.336 15:25:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:15.336 15:25:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:15.336 15:25:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:17.862 15:25:48 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:17.862 /dev/nvme0n1 ]] 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:17.862 15:25:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:18.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.121 15:25:48 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:18.121 rmmod nvme_tcp 00:14:18.121 rmmod nvme_fabrics 00:14:18.121 rmmod nvme_keyring 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1064497 ']' 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1064497 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1064497 ']' 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1064497 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1064497 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1064497' 00:14:18.121 killing process with pid 1064497 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1064497 00:14:18.121 15:25:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1064497 00:14:18.380 15:25:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:18.380 15:25:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:18.380 15:25:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:18.380 15:25:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:18.380 15:25:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:18.380 15:25:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.380 15:25:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:18.380 15:25:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.913 15:25:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:20.913 00:14:20.913 real 0m8.444s 00:14:20.913 user 0m16.281s 00:14:20.913 sys 0m2.217s 00:14:20.913 15:25:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:20.913 15:25:51 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@10 -- # set +x 00:14:20.913 ************************************ 00:14:20.913 END TEST nvmf_nvme_cli 00:14:20.913 ************************************ 00:14:20.913 15:25:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:20.913 15:25:51 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:14:20.913 15:25:51 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:20.913 15:25:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:20.913 15:25:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:20.913 15:25:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:20.913 ************************************ 00:14:20.913 START TEST nvmf_vfio_user 00:14:20.913 ************************************ 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:20.913 * Looking for test storage... 00:14:20.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:20.913 15:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:20.914 15:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:20.914 15:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:20.914 15:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:20.914 15:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:20.914 15:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:20.914 15:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:20.914 15:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:20.914 15:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1065337 00:14:20.914 15:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:20.914 15:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1065337' 00:14:20.914 Process pid: 1065337 00:14:20.914 15:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:20.914 15:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1065337 00:14:20.914 15:25:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1065337 ']' 00:14:20.914 15:25:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.914 15:25:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:20.914 15:25:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.914 15:25:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:20.914 15:25:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:20.914 [2024-07-13 15:25:51.251170] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:14:20.914 [2024-07-13 15:25:51.251266] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.914 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.914 [2024-07-13 15:25:51.284396] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:20.914 [2024-07-13 15:25:51.310075] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:20.914 [2024-07-13 15:25:51.396809] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.914 [2024-07-13 15:25:51.396874] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:20.914 [2024-07-13 15:25:51.396890] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.914 [2024-07-13 15:25:51.396900] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.914 [2024-07-13 15:25:51.396926] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:20.914 [2024-07-13 15:25:51.396978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.914 [2024-07-13 15:25:51.397046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.914 [2024-07-13 15:25:51.397071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:20.914 [2024-07-13 15:25:51.397073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.914 15:25:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:20.914 15:25:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:14:20.914 15:25:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:21.881 15:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:22.139 15:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:22.139 15:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:22.139 15:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:22.139 15:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:22.139 15:25:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:22.396 Malloc1 00:14:22.396 15:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:22.653 15:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:22.910 15:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:23.167 15:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:23.167 15:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:23.167 15:25:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:23.425 Malloc2 00:14:23.425 15:25:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:23.683 15:25:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:23.941 15:25:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:24.199 15:25:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:24.199 15:25:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:24.199 15:25:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:24.199 15:25:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:24.199 15:25:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:24.199 15:25:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:24.199 [2024-07-13 15:25:54.932682] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:14:24.199 [2024-07-13 15:25:54.932719] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1065844 ] 00:14:24.199 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.199 [2024-07-13 15:25:54.949612] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:24.459 [2024-07-13 15:25:54.967496] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:24.459 [2024-07-13 15:25:54.975383] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:24.459 [2024-07-13 15:25:54.975415] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1849d55000 00:14:24.459 [2024-07-13 15:25:54.976381] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:24.459 [2024-07-13 15:25:54.977377] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:24.459 [2024-07-13 15:25:54.978382] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:24.459 [2024-07-13 15:25:54.979383] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:24.459 [2024-07-13 15:25:54.980387] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:24.459 [2024-07-13 15:25:54.981394] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:24.459 [2024-07-13 15:25:54.982396] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:24.459 [2024-07-13 15:25:54.983401] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap 
offset 0 00:14:24.459 [2024-07-13 15:25:54.984413] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:24.459 [2024-07-13 15:25:54.984433] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1848b17000 00:14:24.459 [2024-07-13 15:25:54.985711] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:24.459 [2024-07-13 15:25:55.006766] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:24.459 [2024-07-13 15:25:55.006805] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:24.459 [2024-07-13 15:25:55.009547] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:24.459 [2024-07-13 15:25:55.009600] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:24.459 [2024-07-13 15:25:55.009698] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:24.459 [2024-07-13 15:25:55.009733] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:24.459 [2024-07-13 15:25:55.009743] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:24.459 [2024-07-13 15:25:55.010539] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:24.459 [2024-07-13 15:25:55.010561] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:24.459 [2024-07-13 15:25:55.010573] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:24.459 [2024-07-13 15:25:55.011546] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:24.459 [2024-07-13 15:25:55.011565] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:24.459 [2024-07-13 15:25:55.011579] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:24.459 [2024-07-13 15:25:55.012564] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:24.459 [2024-07-13 15:25:55.012584] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:24.459 [2024-07-13 15:25:55.013554] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:24.459 [2024-07-13 15:25:55.013573] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:24.459 [2024-07-13 
15:25:55.013581] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:24.459 [2024-07-13 15:25:55.013593] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:24.459 [2024-07-13 15:25:55.013706] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:24.459 [2024-07-13 15:25:55.013715] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:24.459 [2024-07-13 15:25:55.013724] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:24.459 [2024-07-13 15:25:55.014565] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:24.459 [2024-07-13 15:25:55.015567] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:24.459 [2024-07-13 15:25:55.016571] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:24.459 [2024-07-13 15:25:55.017566] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:24.459 [2024-07-13 15:25:55.017660] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:24.460 [2024-07-13 15:25:55.018580] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:24.460 [2024-07-13 15:25:55.018599] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:24.460 [2024-07-13 15:25:55.018608] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:24.460 [2024-07-13 15:25:55.018631] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:24.460 [2024-07-13 15:25:55.018645] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:24.460 [2024-07-13 15:25:55.018675] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:24.460 [2024-07-13 15:25:55.018685] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:24.460 [2024-07-13 15:25:55.018708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:24.460 [2024-07-13 15:25:55.018776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:24.460 [2024-07-13 15:25:55.018795] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:24.460 [2024-07-13 
15:25:55.018806] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:24.460 [2024-07-13 15:25:55.018814] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:24.460 [2024-07-13 15:25:55.018822] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:24.460 [2024-07-13 15:25:55.018830] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:24.460 [2024-07-13 15:25:55.018838] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:24.460 [2024-07-13 15:25:55.018860] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:24.460 [2024-07-13 15:25:55.018881] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:24.460 [2024-07-13 15:25:55.018898] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:24.460 [2024-07-13 15:25:55.018932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:24.460 [2024-07-13 15:25:55.018957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.460 [2024-07-13 15:25:55.018971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.460 [2024-07-13 15:25:55.018983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.460 [2024-07-13 15:25:55.018995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.460 [2024-07-13 15:25:55.019004] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:24.460 [2024-07-13 15:25:55.019020] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:24.460 [2024-07-13 15:25:55.019035] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:24.460 [2024-07-13 15:25:55.019047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:24.460 [2024-07-13 15:25:55.019059] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:24.460 [2024-07-13 15:25:55.019068] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:24.460 [2024-07-13 15:25:55.019079] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:24.460 
[2024-07-13 15:25:55.019090] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:24.460 [2024-07-13 15:25:55.019104] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:24.460 [2024-07-13 15:25:55.019116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:24.460 [2024-07-13 15:25:55.019194] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:24.460 [2024-07-13 15:25:55.019211] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:24.460 [2024-07-13 15:25:55.019225] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:24.460 [2024-07-13 15:25:55.019248] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:24.460 [2024-07-13 15:25:55.019257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:24.460 [2024-07-13 15:25:55.019273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:24.460 [2024-07-13 15:25:55.019292] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:24.460 [2024-07-13 15:25:55.019313] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:24.460 [2024-07-13 15:25:55.019328] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:24.460 [2024-07-13 15:25:55.019345] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:24.460 [2024-07-13 15:25:55.019354] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:24.460 [2024-07-13 15:25:55.019363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:24.460 [2024-07-13 15:25:55.019388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:24.460 [2024-07-13 15:25:55.019411] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:24.460 [2024-07-13 15:25:55.019426] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:24.460 [2024-07-13 15:25:55.019437] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:24.460 [2024-07-13 15:25:55.019445] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:24.460 [2024-07-13 15:25:55.019454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 
0x2000002fb000 PRP2 0x0 00:14:24.460 [2024-07-13 15:25:55.019467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:24.460 [2024-07-13 15:25:55.019481] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:24.460 [2024-07-13 15:25:55.019492] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:24.460 [2024-07-13 15:25:55.019506] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:24.460 [2024-07-13 15:25:55.019517] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:24.460 [2024-07-13 15:25:55.019526] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:24.460 [2024-07-13 15:25:55.019535] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:24.460 [2024-07-13 15:25:55.019543] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:24.460 [2024-07-13 15:25:55.019551] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:24.460 [2024-07-13 15:25:55.019560] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:24.460 [2024-07-13 15:25:55.019588] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:24.460 [2024-07-13 15:25:55.019607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:24.460 [2024-07-13 15:25:55.019625] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:24.460 [2024-07-13 15:25:55.019637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:24.460 [2024-07-13 15:25:55.019652] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:24.460 [2024-07-13 15:25:55.019667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:24.460 [2024-07-13 15:25:55.019682] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:24.460 [2024-07-13 15:25:55.019697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:24.460 [2024-07-13 15:25:55.019720] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:24.460 [2024-07-13 15:25:55.019730] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:24.460 [2024-07-13 15:25:55.019736] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:24.460 [2024-07-13 15:25:55.019742] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:24.460 [2024-07-13 15:25:55.019750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:24.460 [2024-07-13 15:25:55.019761] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:24.460 [2024-07-13 15:25:55.019768] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:24.460 [2024-07-13 15:25:55.019777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:24.460 [2024-07-13 15:25:55.019787] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:24.460 [2024-07-13 15:25:55.019794] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:24.460 [2024-07-13 15:25:55.019803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:24.460 [2024-07-13 15:25:55.019814] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:24.460 [2024-07-13 15:25:55.019822] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:24.460 [2024-07-13 15:25:55.019830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:24.460 [2024-07-13 15:25:55.019841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:24.460 [2024-07-13 15:25:55.019884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:24.460 [2024-07-13 15:25:55.019904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:24.460 [2024-07-13 15:25:55.019916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:24.461 ===================================================== 00:14:24.461 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:24.461 ===================================================== 00:14:24.461 Controller Capabilities/Features 00:14:24.461 ================================ 00:14:24.461 Vendor ID: 4e58 00:14:24.461 Subsystem Vendor ID: 4e58 00:14:24.461 Serial Number: SPDK1 00:14:24.461 Model Number: SPDK bdev Controller 00:14:24.461 Firmware Version: 24.09 00:14:24.461 Recommended Arb Burst: 6 00:14:24.461 IEEE OUI Identifier: 8d 6b 50 00:14:24.461 Multi-path I/O 00:14:24.461 May have multiple subsystem ports: Yes 00:14:24.461 May have multiple controllers: Yes 00:14:24.461 Associated with SR-IOV VF: No 00:14:24.461 Max Data Transfer Size: 131072 00:14:24.461 Max Number of Namespaces: 32 00:14:24.461 Max Number of I/O Queues: 127 00:14:24.461 NVMe Specification Version (VS): 1.3 00:14:24.461 NVMe Specification Version (Identify): 1.3 00:14:24.461 Maximum Queue Entries: 256 
00:14:24.461 Contiguous Queues Required: Yes 00:14:24.461 Arbitration Mechanisms Supported 00:14:24.461 Weighted Round Robin: Not Supported 00:14:24.461 Vendor Specific: Not Supported 00:14:24.461 Reset Timeout: 15000 ms 00:14:24.461 Doorbell Stride: 4 bytes 00:14:24.461 NVM Subsystem Reset: Not Supported 00:14:24.461 Command Sets Supported 00:14:24.461 NVM Command Set: Supported 00:14:24.461 Boot Partition: Not Supported 00:14:24.461 Memory Page Size Minimum: 4096 bytes 00:14:24.461 Memory Page Size Maximum: 4096 bytes 00:14:24.461 Persistent Memory Region: Not Supported 00:14:24.461 Optional Asynchronous Events Supported 00:14:24.461 Namespace Attribute Notices: Supported 00:14:24.461 Firmware Activation Notices: Not Supported 00:14:24.461 ANA Change Notices: Not Supported 00:14:24.461 PLE Aggregate Log Change Notices: Not Supported 00:14:24.461 LBA Status Info Alert Notices: Not Supported 00:14:24.461 EGE Aggregate Log Change Notices: Not Supported 00:14:24.461 Normal NVM Subsystem Shutdown event: Not Supported 00:14:24.461 Zone Descriptor Change Notices: Not Supported 00:14:24.461 Discovery Log Change Notices: Not Supported 00:14:24.461 Controller Attributes 00:14:24.461 128-bit Host Identifier: Supported 00:14:24.461 Non-Operational Permissive Mode: Not Supported 00:14:24.461 NVM Sets: Not Supported 00:14:24.461 Read Recovery Levels: Not Supported 00:14:24.461 Endurance Groups: Not Supported 00:14:24.461 Predictable Latency Mode: Not Supported 00:14:24.461 Traffic Based Keep ALive: Not Supported 00:14:24.461 Namespace Granularity: Not Supported 00:14:24.461 SQ Associations: Not Supported 00:14:24.461 UUID List: Not Supported 00:14:24.461 Multi-Domain Subsystem: Not Supported 00:14:24.461 Fixed Capacity Management: Not Supported 00:14:24.461 Variable Capacity Management: Not Supported 00:14:24.461 Delete Endurance Group: Not Supported 00:14:24.461 Delete NVM Set: Not Supported 00:14:24.461 Extended LBA Formats Supported: Not Supported 00:14:24.461 Flexible Data Placement Supported: Not Supported 00:14:24.461 00:14:24.461 Controller Memory Buffer Support 00:14:24.461 ================================ 00:14:24.461 Supported: No 00:14:24.461 00:14:24.461 Persistent Memory Region Support 00:14:24.461 ================================ 00:14:24.461 Supported: No 00:14:24.461 00:14:24.461 Admin Command Set Attributes 00:14:24.461 ============================ 00:14:24.461 Security Send/Receive: Not Supported 00:14:24.461 Format NVM: Not Supported 00:14:24.461 Firmware Activate/Download: Not Supported 00:14:24.461 Namespace Management: Not Supported 00:14:24.461 Device Self-Test: Not Supported 00:14:24.461 Directives: Not Supported 00:14:24.461 NVMe-MI: Not Supported 00:14:24.461 Virtualization Management: Not Supported 00:14:24.461 Doorbell Buffer Config: Not Supported 00:14:24.461 Get LBA Status Capability: Not Supported 00:14:24.461 Command & Feature Lockdown Capability: Not Supported 00:14:24.461 Abort Command Limit: 4 00:14:24.461 Async Event Request Limit: 4 00:14:24.461 Number of Firmware Slots: N/A 00:14:24.461 Firmware Slot 1 Read-Only: N/A 00:14:24.461 Firmware Activation Without Reset: N/A 00:14:24.461 Multiple Update Detection Support: N/A 00:14:24.461 Firmware Update Granularity: No Information Provided 00:14:24.461 Per-Namespace SMART Log: No 00:14:24.461 Asymmetric Namespace Access Log Page: Not Supported 00:14:24.461 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:24.461 Command Effects Log Page: Supported 00:14:24.461 Get Log Page Extended Data: Supported 00:14:24.461 Telemetry 
Log Pages: Not Supported 00:14:24.461 Persistent Event Log Pages: Not Supported 00:14:24.461 Supported Log Pages Log Page: May Support 00:14:24.461 Commands Supported & Effects Log Page: Not Supported 00:14:24.461 Feature Identifiers & Effects Log Page:May Support 00:14:24.461 NVMe-MI Commands & Effects Log Page: May Support 00:14:24.461 Data Area 4 for Telemetry Log: Not Supported 00:14:24.461 Error Log Page Entries Supported: 128 00:14:24.461 Keep Alive: Supported 00:14:24.461 Keep Alive Granularity: 10000 ms 00:14:24.461 00:14:24.461 NVM Command Set Attributes 00:14:24.461 ========================== 00:14:24.461 Submission Queue Entry Size 00:14:24.461 Max: 64 00:14:24.461 Min: 64 00:14:24.461 Completion Queue Entry Size 00:14:24.461 Max: 16 00:14:24.461 Min: 16 00:14:24.461 Number of Namespaces: 32 00:14:24.461 Compare Command: Supported 00:14:24.461 Write Uncorrectable Command: Not Supported 00:14:24.461 Dataset Management Command: Supported 00:14:24.461 Write Zeroes Command: Supported 00:14:24.461 Set Features Save Field: Not Supported 00:14:24.461 Reservations: Not Supported 00:14:24.461 Timestamp: Not Supported 00:14:24.461 Copy: Supported 00:14:24.461 Volatile Write Cache: Present 00:14:24.461 Atomic Write Unit (Normal): 1 00:14:24.461 Atomic Write Unit (PFail): 1 00:14:24.461 Atomic Compare & Write Unit: 1 00:14:24.461 Fused Compare & Write: Supported 00:14:24.461 Scatter-Gather List 00:14:24.461 SGL Command Set: Supported (Dword aligned) 00:14:24.461 SGL Keyed: Not Supported 00:14:24.461 SGL Bit Bucket Descriptor: Not Supported 00:14:24.461 SGL Metadata Pointer: Not Supported 00:14:24.461 Oversized SGL: Not Supported 00:14:24.461 SGL Metadata Address: Not Supported 00:14:24.461 SGL Offset: Not Supported 00:14:24.461 Transport SGL Data Block: Not Supported 00:14:24.461 Replay Protected Memory Block: Not Supported 00:14:24.461 00:14:24.461 Firmware Slot Information 00:14:24.461 ========================= 00:14:24.461 Active slot: 1 00:14:24.461 Slot 1 Firmware Revision: 24.09 00:14:24.461 00:14:24.461 00:14:24.461 Commands Supported and Effects 00:14:24.461 ============================== 00:14:24.461 Admin Commands 00:14:24.461 -------------- 00:14:24.461 Get Log Page (02h): Supported 00:14:24.461 Identify (06h): Supported 00:14:24.461 Abort (08h): Supported 00:14:24.461 Set Features (09h): Supported 00:14:24.461 Get Features (0Ah): Supported 00:14:24.461 Asynchronous Event Request (0Ch): Supported 00:14:24.461 Keep Alive (18h): Supported 00:14:24.461 I/O Commands 00:14:24.461 ------------ 00:14:24.461 Flush (00h): Supported LBA-Change 00:14:24.461 Write (01h): Supported LBA-Change 00:14:24.461 Read (02h): Supported 00:14:24.461 Compare (05h): Supported 00:14:24.461 Write Zeroes (08h): Supported LBA-Change 00:14:24.461 Dataset Management (09h): Supported LBA-Change 00:14:24.461 Copy (19h): Supported LBA-Change 00:14:24.461 00:14:24.461 Error Log 00:14:24.461 ========= 00:14:24.461 00:14:24.461 Arbitration 00:14:24.461 =========== 00:14:24.461 Arbitration Burst: 1 00:14:24.461 00:14:24.461 Power Management 00:14:24.461 ================ 00:14:24.461 Number of Power States: 1 00:14:24.461 Current Power State: Power State #0 00:14:24.461 Power State #0: 00:14:24.461 Max Power: 0.00 W 00:14:24.461 Non-Operational State: Operational 00:14:24.461 Entry Latency: Not Reported 00:14:24.461 Exit Latency: Not Reported 00:14:24.461 Relative Read Throughput: 0 00:14:24.461 Relative Read Latency: 0 00:14:24.461 Relative Write Throughput: 0 00:14:24.461 Relative Write Latency: 0 00:14:24.461 Idle 
Power: Not Reported 00:14:24.461 Active Power: Not Reported 00:14:24.461 Non-Operational Permissive Mode: Not Supported 00:14:24.461 00:14:24.461 Health Information 00:14:24.461 ================== 00:14:24.461 Critical Warnings: 00:14:24.461 Available Spare Space: OK 00:14:24.461 Temperature: OK 00:14:24.461 Device Reliability: OK 00:14:24.461 Read Only: No 00:14:24.461 Volatile Memory Backup: OK 00:14:24.461 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:24.461 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:24.461 Available Spare: 0% 00:14:24.461 Available Sp[2024-07-13 15:25:55.020035] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:24.461 [2024-07-13 15:25:55.020052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:24.461 [2024-07-13 15:25:55.020096] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:24.461 [2024-07-13 15:25:55.020115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.461 [2024-07-13 15:25:55.020126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.462 [2024-07-13 15:25:55.020136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.462 [2024-07-13 15:25:55.020161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:24.462 [2024-07-13 15:25:55.022877] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:24.462 [2024-07-13 15:25:55.022900] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:24.462 [2024-07-13 15:25:55.023601] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:24.462 [2024-07-13 15:25:55.023679] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:24.462 [2024-07-13 15:25:55.023693] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:24.462 [2024-07-13 15:25:55.024607] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:24.462 [2024-07-13 15:25:55.024630] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:24.462 [2024-07-13 15:25:55.024687] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:24.462 [2024-07-13 15:25:55.027878] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:24.462 are Threshold: 0% 00:14:24.462 Life Percentage Used: 0% 00:14:24.462 Data Units Read: 0 00:14:24.462 Data Units Written: 0 00:14:24.462 Host Read Commands: 0 00:14:24.462 Host Write Commands: 0 00:14:24.462 Controller Busy Time: 0 minutes 00:14:24.462 Power Cycles: 0 00:14:24.462 Power On Hours: 0 hours 
00:14:24.462 Unsafe Shutdowns: 0 00:14:24.462 Unrecoverable Media Errors: 0 00:14:24.462 Lifetime Error Log Entries: 0 00:14:24.462 Warning Temperature Time: 0 minutes 00:14:24.462 Critical Temperature Time: 0 minutes 00:14:24.462 00:14:24.462 Number of Queues 00:14:24.462 ================ 00:14:24.462 Number of I/O Submission Queues: 127 00:14:24.462 Number of I/O Completion Queues: 127 00:14:24.462 00:14:24.462 Active Namespaces 00:14:24.462 ================= 00:14:24.462 Namespace ID:1 00:14:24.462 Error Recovery Timeout: Unlimited 00:14:24.462 Command Set Identifier: NVM (00h) 00:14:24.462 Deallocate: Supported 00:14:24.462 Deallocated/Unwritten Error: Not Supported 00:14:24.462 Deallocated Read Value: Unknown 00:14:24.462 Deallocate in Write Zeroes: Not Supported 00:14:24.462 Deallocated Guard Field: 0xFFFF 00:14:24.462 Flush: Supported 00:14:24.462 Reservation: Supported 00:14:24.462 Namespace Sharing Capabilities: Multiple Controllers 00:14:24.462 Size (in LBAs): 131072 (0GiB) 00:14:24.462 Capacity (in LBAs): 131072 (0GiB) 00:14:24.462 Utilization (in LBAs): 131072 (0GiB) 00:14:24.462 NGUID: 7257432BC26649149B34C21E619C9321 00:14:24.462 UUID: 7257432b-c266-4914-9b34-c21e619c9321 00:14:24.462 Thin Provisioning: Not Supported 00:14:24.462 Per-NS Atomic Units: Yes 00:14:24.462 Atomic Boundary Size (Normal): 0 00:14:24.462 Atomic Boundary Size (PFail): 0 00:14:24.462 Atomic Boundary Offset: 0 00:14:24.462 Maximum Single Source Range Length: 65535 00:14:24.462 Maximum Copy Length: 65535 00:14:24.462 Maximum Source Range Count: 1 00:14:24.462 NGUID/EUI64 Never Reused: No 00:14:24.462 Namespace Write Protected: No 00:14:24.462 Number of LBA Formats: 1 00:14:24.462 Current LBA Format: LBA Format #00 00:14:24.462 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:24.462 00:14:24.462 15:25:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:24.462 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.720 [2024-07-13 15:25:55.255709] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:29.974 Initializing NVMe Controllers 00:14:29.974 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:29.974 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:29.974 Initialization complete. Launching workers. 
00:14:29.974 ======================================================== 00:14:29.974 Latency(us) 00:14:29.974 Device Information : IOPS MiB/s Average min max 00:14:29.974 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34713.67 135.60 3686.88 1173.19 7400.67 00:14:29.974 ======================================================== 00:14:29.974 Total : 34713.67 135.60 3686.88 1173.19 7400.67 00:14:29.974 00:14:29.974 [2024-07-13 15:26:00.275924] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:29.974 15:26:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:29.974 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.974 [2024-07-13 15:26:00.509024] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:35.243 Initializing NVMe Controllers 00:14:35.243 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:35.243 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:35.243 Initialization complete. Launching workers. 00:14:35.243 ======================================================== 00:14:35.243 Latency(us) 00:14:35.243 Device Information : IOPS MiB/s Average min max 00:14:35.243 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15986.95 62.45 8011.84 7757.67 15961.36 00:14:35.243 ======================================================== 00:14:35.243 Total : 15986.95 62.45 8011.84 7757.67 15961.36 00:14:35.243 00:14:35.243 [2024-07-13 15:26:05.548690] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:35.243 15:26:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:35.243 EAL: No free 2048 kB hugepages reported on node 1 00:14:35.243 [2024-07-13 15:26:05.758736] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:40.539 [2024-07-13 15:26:10.838237] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:40.539 Initializing NVMe Controllers 00:14:40.539 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:40.539 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:40.539 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:40.539 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:40.539 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:40.539 Initialization complete. Launching workers. 
00:14:40.539 Starting thread on core 2 00:14:40.539 Starting thread on core 3 00:14:40.539 Starting thread on core 1 00:14:40.539 15:26:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:40.539 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.539 [2024-07-13 15:26:11.127760] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:44.724 [2024-07-13 15:26:14.852145] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:44.724 Initializing NVMe Controllers 00:14:44.724 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:44.724 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:44.724 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:44.724 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:44.724 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:44.724 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:44.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:44.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:44.724 Initialization complete. Launching workers. 00:14:44.724 Starting thread on core 1 with urgent priority queue 00:14:44.724 Starting thread on core 2 with urgent priority queue 00:14:44.724 Starting thread on core 3 with urgent priority queue 00:14:44.724 Starting thread on core 0 with urgent priority queue 00:14:44.724 SPDK bdev Controller (SPDK1 ) core 0: 2599.00 IO/s 38.48 secs/100000 ios 00:14:44.724 SPDK bdev Controller (SPDK1 ) core 1: 2638.33 IO/s 37.90 secs/100000 ios 00:14:44.724 SPDK bdev Controller (SPDK1 ) core 2: 2826.67 IO/s 35.38 secs/100000 ios 00:14:44.724 SPDK bdev Controller (SPDK1 ) core 3: 2800.00 IO/s 35.71 secs/100000 ios 00:14:44.724 ======================================================== 00:14:44.724 00:14:44.724 15:26:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:44.724 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.724 [2024-07-13 15:26:15.157396] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:44.724 Initializing NVMe Controllers 00:14:44.724 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:44.724 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:44.724 Namespace ID: 1 size: 0GB 00:14:44.724 Initialization complete. 00:14:44.724 INFO: using host memory buffer for IO 00:14:44.724 Hello world! 
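Each of the example runs above (spdk_nvme_perf in read and write mode, reconnect, arbitration, hello_world) points at the same vfio-user controller through an identical -r transport ID string (trtype:VFIOUSER, traddr set to the vfio-user socket directory, subnqn naming the subsystem) and only varies the workload flags: -q queue depth, -o I/O size in bytes, -w access pattern, -t run time in seconds, -c core mask. The short Python sketch below is a hypothetical convenience wrapper, not part of nvmf_vfio_user.sh; it only reuses the binary path and flags that appear verbatim in this log.

# Hypothetical wrapper around the SPDK example binaries exercised above.
# run_vfio_user_example() is illustrative; only flags shown in this log are used.
import subprocess

def run_vfio_user_example(binary, traddr, subnqn, extra_args):
    # Same transport ID format the test script passes with -r.
    trid = f"trtype:VFIOUSER traddr:{traddr} subnqn:{subnqn}"
    return subprocess.run([binary, "-r", trid, "-g"] + extra_args, check=True)

# The 4 KiB read run from target/nvmf_vfio_user.sh@84 above:
# run_vfio_user_example(
#     "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf",
#     "/var/run/vfio-user/domain/vfio-user1/1",
#     "nqn.2019-07.io.spdk:cnode1",
#     ["-s", "256", "-q", "128", "-o", "4096", "-w", "read", "-t", "5", "-c", "0x2"])

Both perf runs report the same table layout (IOPS, MiB/s, then average/min/max latency in microseconds), which is where the 34713.67 IOPS read and 15986.95 IOPS write figures above come from.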
00:14:44.724 [2024-07-13 15:26:15.190966] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:44.724 15:26:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:44.724 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.724 [2024-07-13 15:26:15.472380] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:46.118 Initializing NVMe Controllers 00:14:46.118 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:46.118 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:46.118 Initialization complete. Launching workers. 00:14:46.118 submit (in ns) avg, min, max = 7260.8, 3574.4, 4016528.9 00:14:46.118 complete (in ns) avg, min, max = 24637.5, 2071.1, 4027215.6 00:14:46.118 00:14:46.118 Submit histogram 00:14:46.118 ================ 00:14:46.118 Range in us Cumulative Count 00:14:46.118 3.556 - 3.579: 0.0074% ( 1) 00:14:46.118 3.579 - 3.603: 0.2652% ( 35) 00:14:46.118 3.603 - 3.627: 0.8399% ( 78) 00:14:46.118 3.627 - 3.650: 3.3670% ( 343) 00:14:46.118 3.650 - 3.674: 7.8907% ( 614) 00:14:46.118 3.674 - 3.698: 15.3245% ( 1009) 00:14:46.118 3.698 - 3.721: 25.7054% ( 1409) 00:14:46.118 3.721 - 3.745: 36.1821% ( 1422) 00:14:46.118 3.745 - 3.769: 44.3012% ( 1102) 00:14:46.118 3.769 - 3.793: 51.3814% ( 961) 00:14:46.118 3.793 - 3.816: 56.6713% ( 718) 00:14:46.118 3.816 - 3.840: 61.9907% ( 722) 00:14:46.118 3.840 - 3.864: 66.7870% ( 651) 00:14:46.118 3.864 - 3.887: 70.7360% ( 536) 00:14:46.118 3.887 - 3.911: 73.8820% ( 427) 00:14:46.118 3.911 - 3.935: 77.1974% ( 450) 00:14:46.118 3.935 - 3.959: 80.3360% ( 426) 00:14:46.118 3.959 - 3.982: 83.6293% ( 447) 00:14:46.118 3.982 - 4.006: 86.0827% ( 333) 00:14:46.118 4.006 - 4.030: 87.9761% ( 257) 00:14:46.118 4.030 - 4.053: 89.7222% ( 237) 00:14:46.118 4.053 - 4.077: 91.4831% ( 239) 00:14:46.118 4.077 - 4.101: 92.9934% ( 205) 00:14:46.118 4.101 - 4.124: 93.9218% ( 126) 00:14:46.118 4.124 - 4.148: 94.5480% ( 85) 00:14:46.118 4.148 - 4.172: 94.9458% ( 54) 00:14:46.118 4.172 - 4.196: 95.1890% ( 33) 00:14:46.118 4.196 - 4.219: 95.4542% ( 36) 00:14:46.118 4.219 - 4.243: 95.6237% ( 23) 00:14:46.118 4.243 - 4.267: 95.7047% ( 11) 00:14:46.118 4.267 - 4.290: 95.8447% ( 19) 00:14:46.118 4.290 - 4.314: 95.9699% ( 17) 00:14:46.118 4.314 - 4.338: 96.1026% ( 18) 00:14:46.118 4.338 - 4.361: 96.2131% ( 15) 00:14:46.118 4.361 - 4.385: 96.2794% ( 9) 00:14:46.118 4.385 - 4.409: 96.3162% ( 5) 00:14:46.118 4.409 - 4.433: 96.3457% ( 4) 00:14:46.118 4.433 - 4.456: 96.3531% ( 1) 00:14:46.118 4.456 - 4.480: 96.3752% ( 3) 00:14:46.118 4.480 - 4.504: 96.3825% ( 1) 00:14:46.118 4.504 - 4.527: 96.4341% ( 7) 00:14:46.118 4.527 - 4.551: 96.4709% ( 5) 00:14:46.118 4.551 - 4.575: 96.5151% ( 6) 00:14:46.118 4.575 - 4.599: 96.5372% ( 3) 00:14:46.118 4.599 - 4.622: 96.5741% ( 5) 00:14:46.118 4.622 - 4.646: 96.5888% ( 2) 00:14:46.118 4.646 - 4.670: 96.5962% ( 1) 00:14:46.118 4.670 - 4.693: 96.6551% ( 8) 00:14:46.118 4.693 - 4.717: 96.7288% ( 10) 00:14:46.118 4.717 - 4.741: 96.7804% ( 7) 00:14:46.118 4.741 - 4.764: 96.8246% ( 6) 00:14:46.118 4.764 - 4.788: 96.8762% ( 7) 00:14:46.118 4.788 - 4.812: 96.9130% ( 5) 00:14:46.118 4.812 - 4.836: 96.9646% ( 7) 00:14:46.118 4.836 - 4.859: 97.0014% ( 5) 00:14:46.118 4.859 - 4.883: 97.0603% ( 8) 00:14:46.118 4.883 - 
4.907: 97.1119% ( 7) 00:14:46.118 4.907 - 4.930: 97.1488% ( 5) 00:14:46.118 4.930 - 4.954: 97.1930% ( 6) 00:14:46.118 4.954 - 4.978: 97.2372% ( 6) 00:14:46.118 4.978 - 5.001: 97.2887% ( 7) 00:14:46.118 5.001 - 5.025: 97.3403% ( 7) 00:14:46.118 5.025 - 5.049: 97.3550% ( 2) 00:14:46.118 5.049 - 5.073: 97.3771% ( 3) 00:14:46.118 5.073 - 5.096: 97.3919% ( 2) 00:14:46.118 5.096 - 5.120: 97.4656% ( 10) 00:14:46.118 5.120 - 5.144: 97.5171% ( 7) 00:14:46.118 5.144 - 5.167: 97.5466% ( 4) 00:14:46.118 5.167 - 5.191: 97.5761% ( 4) 00:14:46.118 5.191 - 5.215: 97.6645% ( 12) 00:14:46.118 5.215 - 5.239: 97.6940% ( 4) 00:14:46.118 5.239 - 5.262: 97.7161% ( 3) 00:14:46.118 5.262 - 5.286: 97.7382% ( 3) 00:14:46.118 5.286 - 5.310: 97.7971% ( 8) 00:14:46.118 5.310 - 5.333: 97.8339% ( 5) 00:14:46.118 5.333 - 5.357: 97.8781% ( 6) 00:14:46.118 5.357 - 5.381: 97.8929% ( 2) 00:14:46.118 5.381 - 5.404: 97.9371% ( 6) 00:14:46.118 5.404 - 5.428: 97.9666% ( 4) 00:14:46.118 5.428 - 5.452: 97.9960% ( 4) 00:14:46.118 5.452 - 5.476: 98.0108% ( 2) 00:14:46.118 5.476 - 5.499: 98.0402% ( 4) 00:14:46.118 5.523 - 5.547: 98.0550% ( 2) 00:14:46.118 5.547 - 5.570: 98.0697% ( 2) 00:14:46.118 5.570 - 5.594: 98.0992% ( 4) 00:14:46.118 5.594 - 5.618: 98.1065% ( 1) 00:14:46.118 5.618 - 5.641: 98.1213% ( 2) 00:14:46.118 5.641 - 5.665: 98.1360% ( 2) 00:14:46.118 5.665 - 5.689: 98.1434% ( 1) 00:14:46.118 5.689 - 5.713: 98.1581% ( 2) 00:14:46.119 5.713 - 5.736: 98.1728% ( 2) 00:14:46.119 5.736 - 5.760: 98.1802% ( 1) 00:14:46.119 5.760 - 5.784: 98.2023% ( 3) 00:14:46.119 5.784 - 5.807: 98.2097% ( 1) 00:14:46.119 5.807 - 5.831: 98.2392% ( 4) 00:14:46.119 5.831 - 5.855: 98.2539% ( 2) 00:14:46.119 5.855 - 5.879: 98.2613% ( 1) 00:14:46.119 5.879 - 5.902: 98.2981% ( 5) 00:14:46.119 5.926 - 5.950: 98.3128% ( 2) 00:14:46.119 5.950 - 5.973: 98.3276% ( 2) 00:14:46.119 5.997 - 6.021: 98.3349% ( 1) 00:14:46.119 6.021 - 6.044: 98.3497% ( 2) 00:14:46.119 6.068 - 6.116: 98.3570% ( 1) 00:14:46.119 6.163 - 6.210: 98.3644% ( 1) 00:14:46.119 6.210 - 6.258: 98.3791% ( 2) 00:14:46.119 6.353 - 6.400: 98.3865% ( 1) 00:14:46.119 6.400 - 6.447: 98.4086% ( 3) 00:14:46.119 6.447 - 6.495: 98.4233% ( 2) 00:14:46.119 6.495 - 6.542: 98.4381% ( 2) 00:14:46.119 6.590 - 6.637: 98.4454% ( 1) 00:14:46.119 6.684 - 6.732: 98.4602% ( 2) 00:14:46.119 6.732 - 6.779: 98.4749% ( 2) 00:14:46.119 6.779 - 6.827: 98.4823% ( 1) 00:14:46.119 6.921 - 6.969: 98.4896% ( 1) 00:14:46.119 7.016 - 7.064: 98.4970% ( 1) 00:14:46.119 7.111 - 7.159: 98.5044% ( 1) 00:14:46.119 7.159 - 7.206: 98.5118% ( 1) 00:14:46.119 7.253 - 7.301: 98.5339% ( 3) 00:14:46.119 7.301 - 7.348: 98.5633% ( 4) 00:14:46.119 7.348 - 7.396: 98.5854% ( 3) 00:14:46.119 7.396 - 7.443: 98.6002% ( 2) 00:14:46.119 7.490 - 7.538: 98.6296% ( 4) 00:14:46.119 7.585 - 7.633: 98.6370% ( 1) 00:14:46.119 7.633 - 7.680: 98.6517% ( 2) 00:14:46.119 7.680 - 7.727: 98.6591% ( 1) 00:14:46.119 7.727 - 7.775: 98.6665% ( 1) 00:14:46.119 7.775 - 7.822: 98.6738% ( 1) 00:14:46.119 7.822 - 7.870: 98.6886% ( 2) 00:14:46.119 7.964 - 8.012: 98.6959% ( 1) 00:14:46.119 8.012 - 8.059: 98.7107% ( 2) 00:14:46.119 8.059 - 8.107: 98.7180% ( 1) 00:14:46.119 8.107 - 8.154: 98.7254% ( 1) 00:14:46.119 8.344 - 8.391: 98.7328% ( 1) 00:14:46.119 8.391 - 8.439: 98.7401% ( 1) 00:14:46.119 8.439 - 8.486: 98.7475% ( 1) 00:14:46.119 8.486 - 8.533: 98.7549% ( 1) 00:14:46.119 8.533 - 8.581: 98.7622% ( 1) 00:14:46.119 8.581 - 8.628: 98.7844% ( 3) 00:14:46.119 8.818 - 8.865: 98.7991% ( 2) 00:14:46.119 8.960 - 9.007: 98.8065% ( 1) 00:14:46.119 9.007 - 9.055: 98.8138% ( 1) 
00:14:46.119 9.150 - 9.197: 98.8212% ( 1) 00:14:46.119 9.197 - 9.244: 98.8286% ( 1) 00:14:46.119 9.434 - 9.481: 98.8359% ( 1) 00:14:46.119 9.576 - 9.624: 98.8507% ( 2) 00:14:46.119 9.719 - 9.766: 98.8580% ( 1) 00:14:46.119 9.861 - 9.908: 98.8654% ( 1) 00:14:46.119 10.145 - 10.193: 98.8728% ( 1) 00:14:46.119 10.287 - 10.335: 98.8801% ( 1) 00:14:46.119 10.335 - 10.382: 98.8949% ( 2) 00:14:46.119 10.619 - 10.667: 98.9022% ( 1) 00:14:46.119 10.714 - 10.761: 98.9096% ( 1) 00:14:46.119 10.999 - 11.046: 98.9170% ( 1) 00:14:46.119 11.473 - 11.520: 98.9317% ( 2) 00:14:46.119 11.567 - 11.615: 98.9391% ( 1) 00:14:46.119 11.662 - 11.710: 98.9464% ( 1) 00:14:46.119 11.710 - 11.757: 98.9538% ( 1) 00:14:46.119 12.136 - 12.231: 98.9612% ( 1) 00:14:46.119 12.231 - 12.326: 98.9685% ( 1) 00:14:46.119 12.516 - 12.610: 98.9759% ( 1) 00:14:46.119 12.800 - 12.895: 98.9833% ( 1) 00:14:46.119 12.895 - 12.990: 98.9980% ( 2) 00:14:46.119 12.990 - 13.084: 99.0054% ( 1) 00:14:46.119 13.179 - 13.274: 99.0127% ( 1) 00:14:46.119 13.274 - 13.369: 99.0201% ( 1) 00:14:46.119 13.748 - 13.843: 99.0275% ( 1) 00:14:46.119 14.317 - 14.412: 99.0348% ( 1) 00:14:46.119 14.601 - 14.696: 99.0496% ( 2) 00:14:46.119 14.791 - 14.886: 99.0717% ( 3) 00:14:46.119 16.024 - 16.119: 99.0791% ( 1) 00:14:46.119 17.067 - 17.161: 99.0864% ( 1) 00:14:46.119 17.256 - 17.351: 99.1012% ( 2) 00:14:46.119 17.351 - 17.446: 99.1233% ( 3) 00:14:46.119 17.446 - 17.541: 99.1306% ( 1) 00:14:46.119 17.541 - 17.636: 99.1601% ( 4) 00:14:46.119 17.636 - 17.730: 99.1822% ( 3) 00:14:46.119 17.730 - 17.825: 99.2190% ( 5) 00:14:46.119 17.825 - 17.920: 99.2485% ( 4) 00:14:46.119 17.920 - 18.015: 99.3148% ( 9) 00:14:46.119 18.015 - 18.110: 99.3590% ( 6) 00:14:46.119 18.110 - 18.204: 99.4032% ( 6) 00:14:46.119 18.204 - 18.299: 99.4474% ( 6) 00:14:46.119 18.299 - 18.394: 99.5064% ( 8) 00:14:46.119 18.394 - 18.489: 99.5432% ( 5) 00:14:46.119 18.489 - 18.584: 99.6095% ( 9) 00:14:46.119 18.584 - 18.679: 99.6316% ( 3) 00:14:46.119 18.679 - 18.773: 99.6685% ( 5) 00:14:46.119 18.773 - 18.868: 99.6832% ( 2) 00:14:46.119 18.868 - 18.963: 99.7127% ( 4) 00:14:46.119 18.963 - 19.058: 99.7348% ( 3) 00:14:46.119 19.058 - 19.153: 99.7495% ( 2) 00:14:46.119 19.153 - 19.247: 99.7642% ( 2) 00:14:46.119 19.247 - 19.342: 99.7863% ( 3) 00:14:46.119 19.342 - 19.437: 99.8084% ( 3) 00:14:46.119 19.437 - 19.532: 99.8305% ( 3) 00:14:46.119 19.627 - 19.721: 99.8600% ( 4) 00:14:46.119 19.721 - 19.816: 99.8748% ( 2) 00:14:46.119 19.911 - 20.006: 99.8821% ( 1) 00:14:46.119 20.196 - 20.290: 99.8895% ( 1) 00:14:46.119 20.385 - 20.480: 99.8969% ( 1) 00:14:46.119 21.523 - 21.618: 99.9042% ( 1) 00:14:46.119 21.902 - 21.997: 99.9116% ( 1) 00:14:46.119 201.766 - 203.283: 99.9190% ( 1) 00:14:46.119 3980.705 - 4004.978: 99.9853% ( 9) 00:14:46.119 4004.978 - 4029.250: 100.0000% ( 2) 00:14:46.119 00:14:46.119 Complete histogram 00:14:46.119 ================== 00:14:46.119 Range in us Cumulative Count 00:14:46.119 2.062 - 2.074: 0.0958% ( 13) 00:14:46.119 2.074 - 2.086: 20.4376% ( 2761) 00:14:46.119 2.086 - 2.098: 43.1150% ( 3078) 00:14:46.119 2.098 - 2.110: 46.6146% ( 475) 00:14:46.119 2.110 - 2.121: 55.3967% ( 1192) 00:14:46.119 2.121 - 2.133: 59.2131% ( 518) 00:14:46.119 2.133 - 2.145: 61.9907% ( 377) 00:14:46.119 2.145 - 2.157: 73.9335% ( 1621) 00:14:46.119 2.157 - 2.169: 80.0486% ( 830) 00:14:46.119 2.169 - 2.181: 81.7947% ( 237) 00:14:46.119 2.181 - 2.193: 85.7290% ( 534) 00:14:46.119 2.193 - 2.204: 87.4899% ( 239) 00:14:46.119 2.204 - 2.216: 88.2635% ( 105) 00:14:46.119 2.216 - 2.228: 90.1569% ( 257) 
00:14:46.119 2.228 - 2.240: 92.0651% ( 259) 00:14:46.119 2.240 - 2.252: 93.1261% ( 144) 00:14:46.119 2.252 - 2.264: 93.6713% ( 74) 00:14:46.119 2.264 - 2.276: 93.9586% ( 39) 00:14:46.119 2.276 - 2.287: 94.1428% ( 25) 00:14:46.119 2.287 - 2.299: 94.2680% ( 17) 00:14:46.119 2.299 - 2.311: 94.5333% ( 36) 00:14:46.119 2.311 - 2.323: 94.7838% ( 34) 00:14:46.119 2.323 - 2.335: 94.8574% ( 10) 00:14:46.119 2.335 - 2.347: 94.9606% ( 14) 00:14:46.119 2.347 - 2.359: 95.1742% ( 29) 00:14:46.119 2.359 - 2.370: 95.4910% ( 43) 00:14:46.119 2.370 - 2.382: 95.7489% ( 35) 00:14:46.119 2.382 - 2.394: 96.0657% ( 43) 00:14:46.119 2.394 - 2.406: 96.2720% ( 28) 00:14:46.119 2.406 - 2.418: 96.4857% ( 29) 00:14:46.119 2.418 - 2.430: 96.6330% ( 20) 00:14:46.119 2.430 - 2.441: 96.7877% ( 21) 00:14:46.119 2.441 - 2.453: 96.8762% ( 12) 00:14:46.119 2.453 - 2.465: 96.9351% ( 8) 00:14:46.119 2.465 - 2.477: 96.9940% ( 8) 00:14:46.119 2.477 - 2.489: 97.0603% ( 9) 00:14:46.119 2.489 - 2.501: 97.1045% ( 6) 00:14:46.119 2.501 - 2.513: 97.1119% ( 1) 00:14:46.119 2.513 - 2.524: 97.1488% ( 5) 00:14:46.119 2.524 - 2.536: 97.1635% ( 2) 00:14:46.119 2.536 - 2.548: 97.2003% ( 5) 00:14:46.119 2.548 - 2.560: 97.2372% ( 5) 00:14:46.119 2.560 - 2.572: 97.2593% ( 3) 00:14:46.119 2.572 - 2.584: 97.2814% ( 3) 00:14:46.119 2.596 - 2.607: 97.2887% ( 1) 00:14:46.119 2.607 - 2.619: 97.3256% ( 5) 00:14:46.119 2.619 - 2.631: 97.3403% ( 2) 00:14:46.119 2.631 - 2.643: 97.3624% ( 3) 00:14:46.119 2.643 - 2.655: 97.3771% ( 2) 00:14:46.119 2.655 - 2.667: 97.4140% ( 5) 00:14:46.119 2.667 - 2.679: 97.4435% ( 4) 00:14:46.119 2.679 - 2.690: 97.4582% ( 2) 00:14:46.119 2.690 - 2.702: 97.4656% ( 1) 00:14:46.119 2.702 - 2.714: 97.5098% ( 6) 00:14:46.119 2.714 - 2.726: 97.5171% ( 1) 00:14:46.119 2.726 - 2.738: 97.5319% ( 2) 00:14:46.119 2.738 - 2.750: 97.5466% ( 2) 00:14:46.119 2.750 - 2.761: 97.5613% ( 2) 00:14:46.119 2.797 - 2.809: 97.5834% ( 3) 00:14:46.119 2.809 - 2.821: 97.5982% ( 2) 00:14:46.119 2.844 - 2.856: 97.6055% ( 1) 00:14:46.119 2.856 - 2.868: 97.6129% ( 1) 00:14:46.119 2.868 - 2.880: 97.6350% ( 3) 00:14:46.119 2.880 - 2.892: 97.6497% ( 2) 00:14:46.119 2.892 - 2.904: 97.6571% ( 1) 00:14:46.119 2.904 - 2.916: 97.6866% ( 4) 00:14:46.119 2.916 - 2.927: 97.6940% ( 1) 00:14:46.119 2.927 - 2.939: 97.7087% ( 2) 00:14:46.119 2.939 - 2.951: 97.7161% ( 1) 00:14:46.119 2.951 - 2.963: 97.7234% ( 1) 00:14:46.119 2.963 - 2.975: 97.7382% ( 2) 00:14:46.119 2.975 - 2.987: 97.7455% ( 1) 00:14:46.119 2.987 - 2.999: 97.7603% ( 2) 00:14:46.119 2.999 - 3.010: 97.7676% ( 1) 00:14:46.119 3.010 - 3.022: 97.7897% ( 3) 00:14:46.119 3.022 - 3.034: 97.8045% ( 2) 00:14:46.119 3.034 - 3.058: 97.8413% ( 5) 00:14:46.119 3.058 - 3.081: 97.8929% ( 7) 00:14:46.119 3.081 - 3.105: 97.9223% ( 4) 00:14:46.119 3.105 - 3.129: 97.9739% ( 7) 00:14:46.119 3.129 - 3.153: 98.0108% ( 5) 00:14:46.120 3.153 - 3.176: 98.0697% ( 8) 00:14:46.120 3.176 - 3.200: 98.1139% ( 6) 00:14:46.120 3.200 - 3.224: 98.1360% ( 3) 00:14:46.120 3.224 - 3.247: 98.1876% ( 7) 00:14:46.120 3.247 - 3.271: 98.1949% ( 1) 00:14:46.120 3.271 - 3.295: 98.2244% ( 4) 00:14:46.120 3.295 - 3.319: 98.2465% ( 3) 00:14:46.120 3.319 - 3.342: 98.2981% ( 7) 00:14:46.120 3.342 - 3.366: 98.3276% ( 4) 00:14:46.120 3.366 - 3.390: 98.3349% ( 1) 00:14:46.120 3.390 - 3.413: 98.4012% ( 9) 00:14:46.120 3.413 - 3.437: 98.4602% ( 8) 00:14:46.120 3.437 - 3.461: 98.4970% ( 5) 00:14:46.120 3.461 - 3.484: 98.5118% ( 2) 00:14:46.120 3.484 - 3.508: 98.5265% ( 2) 00:14:46.120 3.508 - 3.532: 98.5339% ( 1) 00:14:46.120 3.532 - 3.556: 98.5707% ( 5) 
00:14:46.120 3.556 - 3.579: 98.6075% ( 5) 00:14:46.120 3.579 - 3.603: 98.6370% ( 4) 00:14:46.120 3.627 - 3.650: 98.6591% ( 3) 00:14:46.120 3.650 - 3.674: 98.6738% ( 2) 00:14:46.120 3.674 - 3.698: 98.6959% ( 3) 00:14:46.120 3.698 - 3.721: 98.7107% ( 2) 00:14:46.120 3.721 - 3.745: 98.7254% ( 2) 00:14:46.120 3.745 - 3.769: 98.7328% ( 1) 00:14:46.120 3.769 - 3.793: 98.7401% ( 1) 00:14:46.120 3.840 - 3.864: 98.7475% ( 1) 00:14:46.120 3.911 - 3.935: 98.7549% ( 1) 00:14:46.120 3.959 - 3.982: 98.7622% ( 1) 00:14:46.120 4.077 - 4.101: 98.7696% ( 1) 00:14:46.120 4.101 - 4.124: 98.7770% ( 1) 00:14:46.120 4.124 - 4.148: 98.7844% ( 1) 00:14:46.120 4.172 - 4.196: 98.7917% ( 1) 00:14:46.120 4.267 - 4.290: 98.7991% ( 1) 00:14:46.120 4.338 - 4.361: 98.8065% ( 1) 00:14:46.120 5.049 - 5.073: 98.8138% ( 1) 00:14:46.120 5.144 - 5.167: 98.8212% ( 1) 00:14:46.120 5.404 - 5.428: 98.8286% ( 1) 00:14:46.120 5.523 - 5.547: 98.8359% ( 1) 00:14:46.120 5.594 - 5.618: 98.8433% ( 1) 00:14:46.120 5.713 - 5.736: 98.8507% ( 1) 00:14:46.120 5.760 - 5.784: 98.8580% ( 1) 00:14:46.120 5.902 - 5.926: 98.8654% ( 1) 00:14:46.120 5.926 - 5.950: 98.8728% ( 1) 00:14:46.120 6.163 - 6.210: 98.8801% ( 1) 00:14:46.120 6.447 - 6.495: 98.8875% ( 1) 00:14:46.120 6.827 - 6.874: 98.8949% ( 1) 00:14:46.120 6.969 - 7.016: 98.9022% ( 1) 00:14:46.120 7.206 - 7.253: 98.9096% ( 1) 00:14:46.120 7.538 - 7.585: 98.9170% ( 1) 00:14:46.120 7.822 - 7.870: 98.9243% ( 1) 00:14:46.120 9.908 - 9.956: 98.9317% ( 1) 00:14:46.120 13.084 - 13.179: 98.9391%[2024-07-13 15:26:16.495506] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:46.120 ( 1) 00:14:46.120 15.550 - 15.644: 98.9464% ( 1) 00:14:46.120 15.644 - 15.739: 98.9612% ( 2) 00:14:46.120 15.739 - 15.834: 98.9759% ( 2) 00:14:46.120 15.834 - 15.929: 98.9906% ( 2) 00:14:46.120 15.929 - 16.024: 99.0275% ( 5) 00:14:46.120 16.024 - 16.119: 99.0570% ( 4) 00:14:46.120 16.119 - 16.213: 99.0864% ( 4) 00:14:46.120 16.213 - 16.308: 99.1306% ( 6) 00:14:46.120 16.308 - 16.403: 99.1380% ( 1) 00:14:46.120 16.403 - 16.498: 99.1748% ( 5) 00:14:46.120 16.498 - 16.593: 99.2043% ( 4) 00:14:46.120 16.593 - 16.687: 99.2338% ( 4) 00:14:46.120 16.687 - 16.782: 99.2559% ( 3) 00:14:46.120 16.782 - 16.877: 99.2927% ( 5) 00:14:46.120 16.877 - 16.972: 99.3148% ( 3) 00:14:46.120 16.972 - 17.067: 99.3222% ( 1) 00:14:46.120 17.161 - 17.256: 99.3443% ( 3) 00:14:46.120 17.256 - 17.351: 99.3590% ( 2) 00:14:46.120 17.351 - 17.446: 99.3664% ( 1) 00:14:46.120 17.636 - 17.730: 99.3885% ( 3) 00:14:46.120 17.730 - 17.825: 99.3959% ( 1) 00:14:46.120 17.825 - 17.920: 99.4106% ( 2) 00:14:46.120 17.920 - 18.015: 99.4253% ( 2) 00:14:46.120 18.584 - 18.679: 99.4327% ( 1) 00:14:46.120 47.787 - 47.976: 99.4401% ( 1) 00:14:46.120 3980.705 - 4004.978: 99.8084% ( 50) 00:14:46.120 4004.978 - 4029.250: 100.0000% ( 26) 00:14:46.120 00:14:46.120 15:26:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:46.120 15:26:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:46.120 15:26:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:46.120 15:26:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:46.120 15:26:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_get_subsystems 00:14:46.120 [ 00:14:46.120 { 00:14:46.120 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:46.120 "subtype": "Discovery", 00:14:46.120 "listen_addresses": [], 00:14:46.120 "allow_any_host": true, 00:14:46.120 "hosts": [] 00:14:46.120 }, 00:14:46.120 { 00:14:46.120 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:46.120 "subtype": "NVMe", 00:14:46.120 "listen_addresses": [ 00:14:46.120 { 00:14:46.120 "trtype": "VFIOUSER", 00:14:46.120 "adrfam": "IPv4", 00:14:46.120 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:46.120 "trsvcid": "0" 00:14:46.120 } 00:14:46.120 ], 00:14:46.120 "allow_any_host": true, 00:14:46.120 "hosts": [], 00:14:46.120 "serial_number": "SPDK1", 00:14:46.120 "model_number": "SPDK bdev Controller", 00:14:46.120 "max_namespaces": 32, 00:14:46.120 "min_cntlid": 1, 00:14:46.120 "max_cntlid": 65519, 00:14:46.120 "namespaces": [ 00:14:46.120 { 00:14:46.120 "nsid": 1, 00:14:46.120 "bdev_name": "Malloc1", 00:14:46.120 "name": "Malloc1", 00:14:46.120 "nguid": "7257432BC26649149B34C21E619C9321", 00:14:46.120 "uuid": "7257432b-c266-4914-9b34-c21e619c9321" 00:14:46.120 } 00:14:46.120 ] 00:14:46.120 }, 00:14:46.120 { 00:14:46.120 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:46.120 "subtype": "NVMe", 00:14:46.120 "listen_addresses": [ 00:14:46.120 { 00:14:46.120 "trtype": "VFIOUSER", 00:14:46.120 "adrfam": "IPv4", 00:14:46.120 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:46.120 "trsvcid": "0" 00:14:46.120 } 00:14:46.120 ], 00:14:46.120 "allow_any_host": true, 00:14:46.120 "hosts": [], 00:14:46.120 "serial_number": "SPDK2", 00:14:46.120 "model_number": "SPDK bdev Controller", 00:14:46.120 "max_namespaces": 32, 00:14:46.120 "min_cntlid": 1, 00:14:46.120 "max_cntlid": 65519, 00:14:46.120 "namespaces": [ 00:14:46.120 { 00:14:46.120 "nsid": 1, 00:14:46.120 "bdev_name": "Malloc2", 00:14:46.120 "name": "Malloc2", 00:14:46.120 "nguid": "01103B8045364B709915F0062D2F74C3", 00:14:46.120 "uuid": "01103b80-4536-4b70-9915-f0062d2f74c3" 00:14:46.120 } 00:14:46.120 ] 00:14:46.120 } 00:14:46.120 ] 00:14:46.120 15:26:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:46.120 15:26:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1068370 00:14:46.120 15:26:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:46.120 15:26:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:46.120 15:26:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:46.120 15:26:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:46.120 15:26:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:46.120 15:26:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:46.120 15:26:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:46.120 15:26:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:46.120 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.380 [2024-07-13 15:26:16.946322] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:46.380 Malloc3 00:14:46.380 15:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:46.637 [2024-07-13 15:26:17.324091] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:46.637 15:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:46.637 Asynchronous Event Request test 00:14:46.637 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:46.637 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:46.637 Registering asynchronous event callbacks... 00:14:46.637 Starting namespace attribute notice tests for all controllers... 00:14:46.637 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:46.637 aer_cb - Changed Namespace 00:14:46.637 Cleaning up... 00:14:46.896 [ 00:14:46.896 { 00:14:46.896 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:46.896 "subtype": "Discovery", 00:14:46.896 "listen_addresses": [], 00:14:46.896 "allow_any_host": true, 00:14:46.896 "hosts": [] 00:14:46.896 }, 00:14:46.896 { 00:14:46.896 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:46.896 "subtype": "NVMe", 00:14:46.896 "listen_addresses": [ 00:14:46.896 { 00:14:46.896 "trtype": "VFIOUSER", 00:14:46.896 "adrfam": "IPv4", 00:14:46.896 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:46.896 "trsvcid": "0" 00:14:46.896 } 00:14:46.896 ], 00:14:46.896 "allow_any_host": true, 00:14:46.896 "hosts": [], 00:14:46.896 "serial_number": "SPDK1", 00:14:46.896 "model_number": "SPDK bdev Controller", 00:14:46.896 "max_namespaces": 32, 00:14:46.896 "min_cntlid": 1, 00:14:46.896 "max_cntlid": 65519, 00:14:46.896 "namespaces": [ 00:14:46.896 { 00:14:46.896 "nsid": 1, 00:14:46.896 "bdev_name": "Malloc1", 00:14:46.896 "name": "Malloc1", 00:14:46.896 "nguid": "7257432BC26649149B34C21E619C9321", 00:14:46.896 "uuid": "7257432b-c266-4914-9b34-c21e619c9321" 00:14:46.896 }, 00:14:46.896 { 00:14:46.896 "nsid": 2, 00:14:46.896 "bdev_name": "Malloc3", 00:14:46.896 "name": "Malloc3", 00:14:46.896 "nguid": "52ADAE666FEE4FD3B82A9805003145FA", 00:14:46.896 "uuid": "52adae66-6fee-4fd3-b82a-9805003145fa" 00:14:46.896 } 00:14:46.896 ] 00:14:46.896 }, 00:14:46.896 { 00:14:46.896 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:46.896 "subtype": "NVMe", 00:14:46.896 "listen_addresses": [ 00:14:46.896 { 00:14:46.896 "trtype": "VFIOUSER", 00:14:46.896 "adrfam": "IPv4", 00:14:46.896 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:46.896 "trsvcid": "0" 00:14:46.896 } 00:14:46.896 ], 00:14:46.896 "allow_any_host": true, 00:14:46.896 "hosts": [], 00:14:46.896 "serial_number": "SPDK2", 00:14:46.896 "model_number": "SPDK bdev Controller", 00:14:46.896 
"max_namespaces": 32, 00:14:46.896 "min_cntlid": 1, 00:14:46.896 "max_cntlid": 65519, 00:14:46.896 "namespaces": [ 00:14:46.896 { 00:14:46.896 "nsid": 1, 00:14:46.896 "bdev_name": "Malloc2", 00:14:46.896 "name": "Malloc2", 00:14:46.896 "nguid": "01103B8045364B709915F0062D2F74C3", 00:14:46.896 "uuid": "01103b80-4536-4b70-9915-f0062d2f74c3" 00:14:46.896 } 00:14:46.896 ] 00:14:46.896 } 00:14:46.896 ] 00:14:46.896 15:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1068370 00:14:46.896 15:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:46.896 15:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:46.896 15:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:46.896 15:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:46.896 [2024-07-13 15:26:17.598321] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:14:46.896 [2024-07-13 15:26:17.598370] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1068497 ] 00:14:46.896 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.896 [2024-07-13 15:26:17.616430] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:46.896 [2024-07-13 15:26:17.633982] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:46.896 [2024-07-13 15:26:17.642216] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:46.896 [2024-07-13 15:26:17.642248] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f721efb7000 00:14:46.896 [2024-07-13 15:26:17.643216] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.896 [2024-07-13 15:26:17.644233] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.896 [2024-07-13 15:26:17.645221] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.896 [2024-07-13 15:26:17.646249] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:46.896 [2024-07-13 15:26:17.647236] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:46.896 [2024-07-13 15:26:17.648240] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.896 [2024-07-13 15:26:17.649241] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:46.896 [2024-07-13 15:26:17.650263] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.896 [2024-07-13 15:26:17.651281] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:46.896 [2024-07-13 15:26:17.651302] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f721dd79000 00:14:46.896 [2024-07-13 15:26:17.652414] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:47.154 [2024-07-13 15:26:17.668796] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:47.154 [2024-07-13 15:26:17.668835] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:47.154 [2024-07-13 15:26:17.673995] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:47.154 [2024-07-13 15:26:17.674054] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:47.154 [2024-07-13 15:26:17.674145] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:47.154 [2024-07-13 15:26:17.674168] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:47.154 [2024-07-13 15:26:17.674180] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 
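In the nvme_vfio_user debug records here and in the lines that follow, the register offsets are the standard NVMe controller registers: 0x0 is CAP, 0x8 is VS, 0x14 is CC, and 0x1c is CSTS. As a rough illustration only (the bit layout is taken from the NVMe base specification, not from the test), the logged values decode as shown below: VS 0x10300 is version 1.3.0, and the CC value 0x460001 written while enabling the controller sets EN=1 with 64-byte submission queue entries and 16-byte completion queue entries, matching the queue entry sizes Identify reports further down.

# Rough decode of register values printed by the nvme_vfio_user debug log.
# Bit positions follow the NVMe base specification; illustrative only.
def decode_vs(vs):
    # VS register: major version in bits 31:16, minor in 15:8, tertiary in 7:0.
    return f"{(vs >> 16) & 0xffff}.{(vs >> 8) & 0xff}.{vs & 0xff}"

def decode_cc(cc):
    return {
        "EN": cc & 0x1,                           # bit 0: controller enable
        "IOSQES_bytes": 1 << ((cc >> 16) & 0xf),  # bits 19:16: I/O SQ entry size (2^n bytes)
        "IOCQES_bytes": 1 << ((cc >> 20) & 0xf),  # bits 23:20: I/O CQ entry size (2^n bytes)
        "SHN": (cc >> 14) & 0x3,                  # bits 15:14: shutdown notification
    }

print(decode_vs(0x10300))    # 1.3.0 -- the "NVMe Specification Version (VS): 1.3" reported below
print(decode_cc(0x460001))   # EN=1, 64-byte SQ entries, 16-byte CQ entries, SHN=0
print(decode_cc(0x464001))   # same but SHN=1, the normal-shutdown write seen when the
                             # first vfio-user controller was destructed earlier in the log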
00:14:47.154 [2024-07-13 15:26:17.675001] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:47.154 [2024-07-13 15:26:17.675023] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:47.154 [2024-07-13 15:26:17.675037] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:47.154 [2024-07-13 15:26:17.676001] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:47.154 [2024-07-13 15:26:17.676022] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:47.154 [2024-07-13 15:26:17.676036] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:47.154 [2024-07-13 15:26:17.677015] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:47.154 [2024-07-13 15:26:17.677036] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:47.154 [2024-07-13 15:26:17.678024] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:47.154 [2024-07-13 15:26:17.678045] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:47.154 [2024-07-13 15:26:17.678055] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:47.154 [2024-07-13 15:26:17.678073] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:47.154 [2024-07-13 15:26:17.678187] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:47.154 [2024-07-13 15:26:17.678196] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:47.154 [2024-07-13 15:26:17.678215] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:47.154 [2024-07-13 15:26:17.679035] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:47.154 [2024-07-13 15:26:17.680041] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:47.154 [2024-07-13 15:26:17.681049] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:47.154 [2024-07-13 15:26:17.682042] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:47.154 [2024-07-13 15:26:17.682110] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for 
CSTS.RDY = 1 (timeout 15000 ms) 00:14:47.154 [2024-07-13 15:26:17.683063] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:47.154 [2024-07-13 15:26:17.683084] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:47.154 [2024-07-13 15:26:17.683094] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:47.154 [2024-07-13 15:26:17.683118] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:47.155 [2024-07-13 15:26:17.683132] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:47.155 [2024-07-13 15:26:17.683173] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:47.155 [2024-07-13 15:26:17.683183] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:47.155 [2024-07-13 15:26:17.683202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:47.155 [2024-07-13 15:26:17.687885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:47.155 [2024-07-13 15:26:17.687910] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:47.155 [2024-07-13 15:26:17.687923] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:47.155 [2024-07-13 15:26:17.687932] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:47.155 [2024-07-13 15:26:17.687940] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:47.155 [2024-07-13 15:26:17.687948] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:47.155 [2024-07-13 15:26:17.687955] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:47.155 [2024-07-13 15:26:17.687963] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:47.155 [2024-07-13 15:26:17.687976] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:47.155 [2024-07-13 15:26:17.687996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:47.155 [2024-07-13 15:26:17.695894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:47.155 [2024-07-13 15:26:17.695931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:47.155 [2024-07-13 15:26:17.695947] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:47.155 [2024-07-13 15:26:17.695959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:47.155 [2024-07-13 15:26:17.695971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:47.155 [2024-07-13 15:26:17.695980] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:47.155 [2024-07-13 15:26:17.695995] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:47.155 [2024-07-13 15:26:17.696010] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:47.155 [2024-07-13 15:26:17.703876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:47.155 [2024-07-13 15:26:17.703895] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:47.155 [2024-07-13 15:26:17.703905] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:47.155 [2024-07-13 15:26:17.703927] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:47.155 [2024-07-13 15:26:17.703938] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:47.155 [2024-07-13 15:26:17.703952] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:47.155 [2024-07-13 15:26:17.711881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:47.155 [2024-07-13 15:26:17.711952] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:47.155 [2024-07-13 15:26:17.711968] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:47.155 [2024-07-13 15:26:17.711980] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:47.155 [2024-07-13 15:26:17.711989] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:47.155 [2024-07-13 15:26:17.711999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:47.155 [2024-07-13 15:26:17.719877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:47.155 [2024-07-13 15:26:17.719901] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:47.155 [2024-07-13 15:26:17.719921] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:47.155 [2024-07-13 15:26:17.719939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:47.155 [2024-07-13 15:26:17.719953] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:47.155 [2024-07-13 15:26:17.719961] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:47.155 [2024-07-13 15:26:17.719971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:47.155 [2024-07-13 15:26:17.727877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:47.155 [2024-07-13 15:26:17.727907] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:47.155 [2024-07-13 15:26:17.727924] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:47.155 [2024-07-13 15:26:17.727937] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:47.155 [2024-07-13 15:26:17.727945] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:47.155 [2024-07-13 15:26:17.727955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:47.155 [2024-07-13 15:26:17.735875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:47.155 [2024-07-13 15:26:17.735898] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:47.155 [2024-07-13 15:26:17.735911] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:47.155 [2024-07-13 15:26:17.735926] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:47.155 [2024-07-13 15:26:17.735937] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:14:47.155 [2024-07-13 15:26:17.735946] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:47.155 [2024-07-13 15:26:17.735955] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:47.155 [2024-07-13 15:26:17.735964] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:47.155 [2024-07-13 15:26:17.735972] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:47.155 
[2024-07-13 15:26:17.735981] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:47.155 [2024-07-13 15:26:17.736007] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:47.155 [2024-07-13 15:26:17.743889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:47.155 [2024-07-13 15:26:17.743915] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:47.155 [2024-07-13 15:26:17.751891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:47.155 [2024-07-13 15:26:17.751916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:47.155 [2024-07-13 15:26:17.759889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:47.155 [2024-07-13 15:26:17.759920] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:47.155 [2024-07-13 15:26:17.767893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:47.155 [2024-07-13 15:26:17.767925] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:47.155 [2024-07-13 15:26:17.767937] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:47.155 [2024-07-13 15:26:17.767943] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:47.155 [2024-07-13 15:26:17.767949] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:47.155 [2024-07-13 15:26:17.767959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:47.155 [2024-07-13 15:26:17.767970] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:47.155 [2024-07-13 15:26:17.767978] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:47.155 [2024-07-13 15:26:17.767987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:47.155 [2024-07-13 15:26:17.767998] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:47.155 [2024-07-13 15:26:17.768005] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:47.155 [2024-07-13 15:26:17.768014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:47.155 [2024-07-13 15:26:17.768026] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:47.155 [2024-07-13 15:26:17.768033] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:47.155 [2024-07-13 15:26:17.768042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:47.155 [2024-07-13 15:26:17.775878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:47.155 [2024-07-13 15:26:17.775907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:47.155 [2024-07-13 15:26:17.775924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:47.155 [2024-07-13 15:26:17.775936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:47.155 ===================================================== 00:14:47.155 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:47.155 ===================================================== 00:14:47.155 Controller Capabilities/Features 00:14:47.155 ================================ 00:14:47.155 Vendor ID: 4e58 00:14:47.155 Subsystem Vendor ID: 4e58 00:14:47.155 Serial Number: SPDK2 00:14:47.155 Model Number: SPDK bdev Controller 00:14:47.155 Firmware Version: 24.09 00:14:47.155 Recommended Arb Burst: 6 00:14:47.155 IEEE OUI Identifier: 8d 6b 50 00:14:47.155 Multi-path I/O 00:14:47.155 May have multiple subsystem ports: Yes 00:14:47.155 May have multiple controllers: Yes 00:14:47.155 Associated with SR-IOV VF: No 00:14:47.155 Max Data Transfer Size: 131072 00:14:47.155 Max Number of Namespaces: 32 00:14:47.155 Max Number of I/O Queues: 127 00:14:47.155 NVMe Specification Version (VS): 1.3 00:14:47.155 NVMe Specification Version (Identify): 1.3 00:14:47.155 Maximum Queue Entries: 256 00:14:47.155 Contiguous Queues Required: Yes 00:14:47.155 Arbitration Mechanisms Supported 00:14:47.155 Weighted Round Robin: Not Supported 00:14:47.155 Vendor Specific: Not Supported 00:14:47.155 Reset Timeout: 15000 ms 00:14:47.155 Doorbell Stride: 4 bytes 00:14:47.155 NVM Subsystem Reset: Not Supported 00:14:47.155 Command Sets Supported 00:14:47.155 NVM Command Set: Supported 00:14:47.155 Boot Partition: Not Supported 00:14:47.155 Memory Page Size Minimum: 4096 bytes 00:14:47.155 Memory Page Size Maximum: 4096 bytes 00:14:47.155 Persistent Memory Region: Not Supported 00:14:47.155 Optional Asynchronous Events Supported 00:14:47.155 Namespace Attribute Notices: Supported 00:14:47.155 Firmware Activation Notices: Not Supported 00:14:47.155 ANA Change Notices: Not Supported 00:14:47.155 PLE Aggregate Log Change Notices: Not Supported 00:14:47.155 LBA Status Info Alert Notices: Not Supported 00:14:47.155 EGE Aggregate Log Change Notices: Not Supported 00:14:47.155 Normal NVM Subsystem Shutdown event: Not Supported 00:14:47.155 Zone Descriptor Change Notices: Not Supported 00:14:47.155 Discovery Log Change Notices: Not Supported 00:14:47.155 Controller Attributes 00:14:47.155 128-bit Host Identifier: Supported 00:14:47.155 Non-Operational Permissive Mode: Not Supported 00:14:47.155 NVM Sets: Not Supported 00:14:47.155 Read Recovery Levels: Not Supported 00:14:47.155 Endurance Groups: Not Supported 00:14:47.155 Predictable Latency Mode: Not Supported 00:14:47.156 Traffic Based Keep ALive: Not Supported 00:14:47.156 Namespace Granularity: Not Supported 00:14:47.156 SQ Associations: Not Supported 00:14:47.156 UUID List: Not Supported 00:14:47.156 Multi-Domain Subsystem: Not Supported 00:14:47.156 Fixed Capacity Management: 
Not Supported 00:14:47.156 Variable Capacity Management: Not Supported 00:14:47.156 Delete Endurance Group: Not Supported 00:14:47.156 Delete NVM Set: Not Supported 00:14:47.156 Extended LBA Formats Supported: Not Supported 00:14:47.156 Flexible Data Placement Supported: Not Supported 00:14:47.156 00:14:47.156 Controller Memory Buffer Support 00:14:47.156 ================================ 00:14:47.156 Supported: No 00:14:47.156 00:14:47.156 Persistent Memory Region Support 00:14:47.156 ================================ 00:14:47.156 Supported: No 00:14:47.156 00:14:47.156 Admin Command Set Attributes 00:14:47.156 ============================ 00:14:47.156 Security Send/Receive: Not Supported 00:14:47.156 Format NVM: Not Supported 00:14:47.156 Firmware Activate/Download: Not Supported 00:14:47.156 Namespace Management: Not Supported 00:14:47.156 Device Self-Test: Not Supported 00:14:47.156 Directives: Not Supported 00:14:47.156 NVMe-MI: Not Supported 00:14:47.156 Virtualization Management: Not Supported 00:14:47.156 Doorbell Buffer Config: Not Supported 00:14:47.156 Get LBA Status Capability: Not Supported 00:14:47.156 Command & Feature Lockdown Capability: Not Supported 00:14:47.156 Abort Command Limit: 4 00:14:47.156 Async Event Request Limit: 4 00:14:47.156 Number of Firmware Slots: N/A 00:14:47.156 Firmware Slot 1 Read-Only: N/A 00:14:47.156 Firmware Activation Without Reset: N/A 00:14:47.156 Multiple Update Detection Support: N/A 00:14:47.156 Firmware Update Granularity: No Information Provided 00:14:47.156 Per-Namespace SMART Log: No 00:14:47.156 Asymmetric Namespace Access Log Page: Not Supported 00:14:47.156 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:47.156 Command Effects Log Page: Supported 00:14:47.156 Get Log Page Extended Data: Supported 00:14:47.156 Telemetry Log Pages: Not Supported 00:14:47.156 Persistent Event Log Pages: Not Supported 00:14:47.156 Supported Log Pages Log Page: May Support 00:14:47.156 Commands Supported & Effects Log Page: Not Supported 00:14:47.156 Feature Identifiers & Effects Log Page:May Support 00:14:47.156 NVMe-MI Commands & Effects Log Page: May Support 00:14:47.156 Data Area 4 for Telemetry Log: Not Supported 00:14:47.156 Error Log Page Entries Supported: 128 00:14:47.156 Keep Alive: Supported 00:14:47.156 Keep Alive Granularity: 10000 ms 00:14:47.156 00:14:47.156 NVM Command Set Attributes 00:14:47.156 ========================== 00:14:47.156 Submission Queue Entry Size 00:14:47.156 Max: 64 00:14:47.156 Min: 64 00:14:47.156 Completion Queue Entry Size 00:14:47.156 Max: 16 00:14:47.156 Min: 16 00:14:47.156 Number of Namespaces: 32 00:14:47.156 Compare Command: Supported 00:14:47.156 Write Uncorrectable Command: Not Supported 00:14:47.156 Dataset Management Command: Supported 00:14:47.156 Write Zeroes Command: Supported 00:14:47.156 Set Features Save Field: Not Supported 00:14:47.156 Reservations: Not Supported 00:14:47.156 Timestamp: Not Supported 00:14:47.156 Copy: Supported 00:14:47.156 Volatile Write Cache: Present 00:14:47.156 Atomic Write Unit (Normal): 1 00:14:47.156 Atomic Write Unit (PFail): 1 00:14:47.156 Atomic Compare & Write Unit: 1 00:14:47.156 Fused Compare & Write: Supported 00:14:47.156 Scatter-Gather List 00:14:47.156 SGL Command Set: Supported (Dword aligned) 00:14:47.156 SGL Keyed: Not Supported 00:14:47.156 SGL Bit Bucket Descriptor: Not Supported 00:14:47.156 SGL Metadata Pointer: Not Supported 00:14:47.156 Oversized SGL: Not Supported 00:14:47.156 SGL Metadata Address: Not Supported 00:14:47.156 SGL Offset: Not Supported 
00:14:47.156 Transport SGL Data Block: Not Supported 00:14:47.156 Replay Protected Memory Block: Not Supported 00:14:47.156 00:14:47.156 Firmware Slot Information 00:14:47.156 ========================= 00:14:47.156 Active slot: 1 00:14:47.156 Slot 1 Firmware Revision: 24.09 00:14:47.156 00:14:47.156 00:14:47.156 Commands Supported and Effects 00:14:47.156 ============================== 00:14:47.156 Admin Commands 00:14:47.156 -------------- 00:14:47.156 Get Log Page (02h): Supported 00:14:47.156 Identify (06h): Supported 00:14:47.156 Abort (08h): Supported 00:14:47.156 Set Features (09h): Supported 00:14:47.156 Get Features (0Ah): Supported 00:14:47.156 Asynchronous Event Request (0Ch): Supported 00:14:47.156 Keep Alive (18h): Supported 00:14:47.156 I/O Commands 00:14:47.156 ------------ 00:14:47.156 Flush (00h): Supported LBA-Change 00:14:47.156 Write (01h): Supported LBA-Change 00:14:47.156 Read (02h): Supported 00:14:47.156 Compare (05h): Supported 00:14:47.156 Write Zeroes (08h): Supported LBA-Change 00:14:47.156 Dataset Management (09h): Supported LBA-Change 00:14:47.156 Copy (19h): Supported LBA-Change 00:14:47.156 00:14:47.156 Error Log 00:14:47.156 ========= 00:14:47.156 00:14:47.156 Arbitration 00:14:47.156 =========== 00:14:47.156 Arbitration Burst: 1 00:14:47.156 00:14:47.156 Power Management 00:14:47.156 ================ 00:14:47.156 Number of Power States: 1 00:14:47.156 Current Power State: Power State #0 00:14:47.156 Power State #0: 00:14:47.156 Max Power: 0.00 W 00:14:47.156 Non-Operational State: Operational 00:14:47.156 Entry Latency: Not Reported 00:14:47.156 Exit Latency: Not Reported 00:14:47.156 Relative Read Throughput: 0 00:14:47.156 Relative Read Latency: 0 00:14:47.156 Relative Write Throughput: 0 00:14:47.156 Relative Write Latency: 0 00:14:47.156 Idle Power: Not Reported 00:14:47.156 Active Power: Not Reported 00:14:47.156 Non-Operational Permissive Mode: Not Supported 00:14:47.156 00:14:47.156 Health Information 00:14:47.156 ================== 00:14:47.156 Critical Warnings: 00:14:47.156 Available Spare Space: OK 00:14:47.156 Temperature: OK 00:14:47.156 Device Reliability: OK 00:14:47.156 Read Only: No 00:14:47.156 Volatile Memory Backup: OK 00:14:47.156 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:47.156 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:47.156 Available Spare: 0% 00:14:47.156 Available Sp[2024-07-13 15:26:17.776047] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:47.156 [2024-07-13 15:26:17.783874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:47.156 [2024-07-13 15:26:17.783926] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:47.156 [2024-07-13 15:26:17.783943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:47.156 [2024-07-13 15:26:17.783954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:47.156 [2024-07-13 15:26:17.783964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:47.156 [2024-07-13 15:26:17.783974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:14:47.156 [2024-07-13 15:26:17.784062] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:47.156 [2024-07-13 15:26:17.784083] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:47.156 [2024-07-13 15:26:17.785064] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:47.156 [2024-07-13 15:26:17.785134] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:47.156 [2024-07-13 15:26:17.785149] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:47.156 [2024-07-13 15:26:17.787891] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:47.156 [2024-07-13 15:26:17.787915] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 2 milliseconds 00:14:47.156 [2024-07-13 15:26:17.787967] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:47.156 [2024-07-13 15:26:17.789154] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:47.156 are Threshold: 0% 00:14:47.156 Life Percentage Used: 0% 00:14:47.156 Data Units Read: 0 00:14:47.156 Data Units Written: 0 00:14:47.156 Host Read Commands: 0 00:14:47.156 Host Write Commands: 0 00:14:47.156 Controller Busy Time: 0 minutes 00:14:47.156 Power Cycles: 0 00:14:47.156 Power On Hours: 0 hours 00:14:47.156 Unsafe Shutdowns: 0 00:14:47.156 Unrecoverable Media Errors: 0 00:14:47.156 Lifetime Error Log Entries: 0 00:14:47.156 Warning Temperature Time: 0 minutes 00:14:47.156 Critical Temperature Time: 0 minutes 00:14:47.156 00:14:47.156 Number of Queues 00:14:47.156 ================ 00:14:47.156 Number of I/O Submission Queues: 127 00:14:47.156 Number of I/O Completion Queues: 127 00:14:47.156 00:14:47.156 Active Namespaces 00:14:47.156 ================= 00:14:47.156 Namespace ID:1 00:14:47.156 Error Recovery Timeout: Unlimited 00:14:47.156 Command Set Identifier: NVM (00h) 00:14:47.156 Deallocate: Supported 00:14:47.156 Deallocated/Unwritten Error: Not Supported 00:14:47.157 Deallocated Read Value: Unknown 00:14:47.157 Deallocate in Write Zeroes: Not Supported 00:14:47.157 Deallocated Guard Field: 0xFFFF 00:14:47.157 Flush: Supported 00:14:47.157 Reservation: Supported 00:14:47.157 Namespace Sharing Capabilities: Multiple Controllers 00:14:47.157 Size (in LBAs): 131072 (0GiB) 00:14:47.157 Capacity (in LBAs): 131072 (0GiB) 00:14:47.157 Utilization (in LBAs): 131072 (0GiB) 00:14:47.157 NGUID: 01103B8045364B709915F0062D2F74C3 00:14:47.157 UUID: 01103b80-4536-4b70-9915-f0062d2f74c3 00:14:47.157 Thin Provisioning: Not Supported 00:14:47.157 Per-NS Atomic Units: Yes 00:14:47.157 Atomic Boundary Size (Normal): 0 00:14:47.157 Atomic Boundary Size (PFail): 0 00:14:47.157 Atomic Boundary Offset: 0 00:14:47.157 Maximum Single Source Range Length: 65535 00:14:47.157 Maximum Copy Length: 65535 00:14:47.157 Maximum Source Range Count: 1 00:14:47.157 NGUID/EUI64 Never Reused: No 00:14:47.157 Namespace Write Protected: No 00:14:47.157 Number of LBA Formats: 1 00:14:47.157 Current LBA Format: LBA Format #00 
00:14:47.157 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:47.157 00:14:47.157 15:26:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:47.157 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.414 [2024-07-13 15:26:18.018663] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:52.680 Initializing NVMe Controllers 00:14:52.680 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:52.680 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:52.680 Initialization complete. Launching workers. 00:14:52.680 ======================================================== 00:14:52.680 Latency(us) 00:14:52.680 Device Information : IOPS MiB/s Average min max 00:14:52.680 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35067.26 136.98 3649.44 1162.79 8256.43 00:14:52.680 ======================================================== 00:14:52.680 Total : 35067.26 136.98 3649.44 1162.79 8256.43 00:14:52.680 00:14:52.680 [2024-07-13 15:26:23.124251] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:52.680 15:26:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:52.680 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.680 [2024-07-13 15:26:23.365936] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:57.943 Initializing NVMe Controllers 00:14:57.943 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:57.943 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:57.943 Initialization complete. Launching workers. 
00:14:57.943 ======================================================== 00:14:57.943 Latency(us) 00:14:57.943 Device Information : IOPS MiB/s Average min max 00:14:57.943 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32585.86 127.29 3927.30 1207.22 8254.79 00:14:57.943 ======================================================== 00:14:57.943 Total : 32585.86 127.29 3927.30 1207.22 8254.79 00:14:57.943 00:14:57.943 [2024-07-13 15:26:28.389184] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:57.943 15:26:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:57.943 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.943 [2024-07-13 15:26:28.607992] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:03.206 [2024-07-13 15:26:33.746023] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:03.206 Initializing NVMe Controllers 00:15:03.206 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:03.206 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:03.206 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:03.206 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:03.206 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:03.206 Initialization complete. Launching workers. 00:15:03.206 Starting thread on core 2 00:15:03.206 Starting thread on core 3 00:15:03.206 Starting thread on core 1 00:15:03.206 15:26:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:03.206 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.465 [2024-07-13 15:26:34.048628] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:06.748 [2024-07-13 15:26:37.102707] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:06.748 Initializing NVMe Controllers 00:15:06.748 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:06.748 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:06.748 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:06.748 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:06.748 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:06.748 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:06.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:06.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:06.748 Initialization complete. Launching workers. 
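The two spdk_nvme_perf summaries above (4 KiB read and 4 KiB write against the vfio-user controller at queue depth 128) are internally consistent: the MiB/s column is IOPS times the 4096-byte I/O size, and IOPS is roughly queue depth divided by the average latency. A quick stand-alone sanity check, not part of the test scripts:

  awk 'BEGIN {
    iosz = 4096; qd = 128
    # read run : 35067.26 IOPS, 136.98 MiB/s, 3649.44 us average latency
    # write run: 32585.86 IOPS, 127.29 MiB/s, 3927.30 us average latency
    printf "read  MiB/s %.2f, IOPS from qd/latency %.0f\n", 35067.26 * iosz / 1048576, qd / (3649.44 / 1e6)
    printf "write MiB/s %.2f, IOPS from qd/latency %.0f\n", 32585.86 * iosz / 1048576, qd / (3927.30 / 1e6)
  }'

This reproduces 136.98 and 127.29 MiB/s exactly, and about 35074 and 32593 IOPS, within a fraction of a percent of the measured figures.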
00:15:06.748 Starting thread on core 1 with urgent priority queue 00:15:06.748 Starting thread on core 2 with urgent priority queue 00:15:06.748 Starting thread on core 3 with urgent priority queue 00:15:06.748 Starting thread on core 0 with urgent priority queue 00:15:06.748 SPDK bdev Controller (SPDK2 ) core 0: 4785.67 IO/s 20.90 secs/100000 ios 00:15:06.748 SPDK bdev Controller (SPDK2 ) core 1: 4491.67 IO/s 22.26 secs/100000 ios 00:15:06.748 SPDK bdev Controller (SPDK2 ) core 2: 4615.33 IO/s 21.67 secs/100000 ios 00:15:06.748 SPDK bdev Controller (SPDK2 ) core 3: 4877.33 IO/s 20.50 secs/100000 ios 00:15:06.748 ======================================================== 00:15:06.748 00:15:06.748 15:26:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:06.748 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.748 [2024-07-13 15:26:37.408453] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:06.748 Initializing NVMe Controllers 00:15:06.748 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:06.748 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:06.748 Namespace ID: 1 size: 0GB 00:15:06.748 Initialization complete. 00:15:06.748 INFO: using host memory buffer for IO 00:15:06.748 Hello world! 00:15:06.749 [2024-07-13 15:26:37.417508] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:06.749 15:26:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:07.005 EAL: No free 2048 kB hugepages reported on node 1 00:15:07.005 [2024-07-13 15:26:37.716644] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:08.376 Initializing NVMe Controllers 00:15:08.376 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:08.376 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:08.376 Initialization complete. Launching workers. 
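In the arbitration summary above, the secs/100000 ios column is just 100000 divided by the per-core IO/s figure; for example core 0 at 4785.67 IO/s gives 100000 / 4785.67 ≈ 20.90 s. One line reproduces all four reported values:

  awk 'BEGIN { printf "%.2f %.2f %.2f %.2f\n", 1e5/4785.67, 1e5/4491.67, 1e5/4615.33, 1e5/4877.33 }'   # 20.90 22.26 21.67 20.50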
00:15:08.376 submit (in ns) avg, min, max = 7516.7, 3507.8, 4015958.9 00:15:08.376 complete (in ns) avg, min, max = 25843.6, 2057.8, 4021020.0 00:15:08.376 00:15:08.376 Submit histogram 00:15:08.376 ================ 00:15:08.376 Range in us Cumulative Count 00:15:08.376 3.484 - 3.508: 0.0075% ( 1) 00:15:08.376 3.508 - 3.532: 0.4493% ( 59) 00:15:08.376 3.532 - 3.556: 1.1307% ( 91) 00:15:08.376 3.556 - 3.579: 3.4744% ( 313) 00:15:08.376 3.579 - 3.603: 7.8847% ( 589) 00:15:08.376 3.603 - 3.627: 14.3841% ( 868) 00:15:08.376 3.627 - 3.650: 23.8113% ( 1259) 00:15:08.376 3.650 - 3.674: 33.2684% ( 1263) 00:15:08.376 3.674 - 3.698: 42.1565% ( 1187) 00:15:08.376 3.698 - 3.721: 50.1984% ( 1074) 00:15:08.376 3.721 - 3.745: 56.2711% ( 811) 00:15:08.376 3.745 - 3.769: 61.5051% ( 699) 00:15:08.376 3.769 - 3.793: 66.1176% ( 616) 00:15:08.376 3.793 - 3.816: 69.8315% ( 496) 00:15:08.376 3.816 - 3.840: 72.9689% ( 419) 00:15:08.376 3.840 - 3.864: 76.3609% ( 453) 00:15:08.376 3.864 - 3.887: 79.2737% ( 389) 00:15:08.376 3.887 - 3.911: 82.2389% ( 396) 00:15:08.376 3.911 - 3.935: 85.1217% ( 385) 00:15:08.376 3.935 - 3.959: 87.2258% ( 281) 00:15:08.376 3.959 - 3.982: 89.1876% ( 262) 00:15:08.376 3.982 - 4.006: 91.0146% ( 244) 00:15:08.376 4.006 - 4.030: 92.3774% ( 182) 00:15:08.376 4.030 - 4.053: 93.6204% ( 166) 00:15:08.376 4.053 - 4.077: 94.4964% ( 117) 00:15:08.376 4.077 - 4.101: 95.1778% ( 91) 00:15:08.376 4.101 - 4.124: 95.6271% ( 60) 00:15:08.376 4.124 - 4.148: 95.9641% ( 45) 00:15:08.376 4.148 - 4.172: 96.1063% ( 19) 00:15:08.376 4.172 - 4.196: 96.2336% ( 17) 00:15:08.376 4.196 - 4.219: 96.3984% ( 22) 00:15:08.376 4.219 - 4.243: 96.5107% ( 15) 00:15:08.376 4.243 - 4.267: 96.6305% ( 16) 00:15:08.376 4.267 - 4.290: 96.7353% ( 14) 00:15:08.376 4.290 - 4.314: 96.9075% ( 23) 00:15:08.376 4.314 - 4.338: 97.0273% ( 16) 00:15:08.376 4.338 - 4.361: 97.0573% ( 4) 00:15:08.376 4.361 - 4.385: 97.1022% ( 6) 00:15:08.376 4.385 - 4.409: 97.1322% ( 4) 00:15:08.376 4.409 - 4.433: 97.1846% ( 7) 00:15:08.376 4.433 - 4.456: 97.2070% ( 3) 00:15:08.376 4.456 - 4.480: 97.2145% ( 1) 00:15:08.376 4.480 - 4.504: 97.2595% ( 6) 00:15:08.376 4.504 - 4.527: 97.2744% ( 2) 00:15:08.376 4.527 - 4.551: 97.2819% ( 1) 00:15:08.376 4.551 - 4.575: 97.2894% ( 1) 00:15:08.376 4.599 - 4.622: 97.3044% ( 2) 00:15:08.376 4.622 - 4.646: 97.3194% ( 2) 00:15:08.376 4.670 - 4.693: 97.3343% ( 2) 00:15:08.376 4.693 - 4.717: 97.3418% ( 1) 00:15:08.376 4.717 - 4.741: 97.3568% ( 2) 00:15:08.376 4.741 - 4.764: 97.3643% ( 1) 00:15:08.376 4.764 - 4.788: 97.3793% ( 2) 00:15:08.376 4.788 - 4.812: 97.3942% ( 2) 00:15:08.376 4.812 - 4.836: 97.4167% ( 3) 00:15:08.376 4.836 - 4.859: 97.4242% ( 1) 00:15:08.376 4.859 - 4.883: 97.4766% ( 7) 00:15:08.376 4.883 - 4.907: 97.5365% ( 8) 00:15:08.376 4.907 - 4.930: 97.5739% ( 5) 00:15:08.376 4.930 - 4.954: 97.6264% ( 7) 00:15:08.376 4.954 - 4.978: 97.6488% ( 3) 00:15:08.376 4.978 - 5.001: 97.6863% ( 5) 00:15:08.376 5.001 - 5.025: 97.7387% ( 7) 00:15:08.376 5.025 - 5.049: 97.7686% ( 4) 00:15:08.376 5.049 - 5.073: 97.8210% ( 7) 00:15:08.376 5.073 - 5.096: 97.8660% ( 6) 00:15:08.376 5.096 - 5.120: 97.9109% ( 6) 00:15:08.376 5.120 - 5.144: 97.9334% ( 3) 00:15:08.376 5.144 - 5.167: 97.9858% ( 7) 00:15:08.376 5.167 - 5.191: 98.0232% ( 5) 00:15:08.376 5.191 - 5.215: 98.0457% ( 3) 00:15:08.376 5.215 - 5.239: 98.0681% ( 3) 00:15:08.376 5.239 - 5.262: 98.1056% ( 5) 00:15:08.376 5.262 - 5.286: 98.1206% ( 2) 00:15:08.376 5.286 - 5.310: 98.1505% ( 4) 00:15:08.376 5.310 - 5.333: 98.2104% ( 8) 00:15:08.376 5.333 - 5.357: 98.2179% ( 1) 
00:15:08.376 5.357 - 5.381: 98.2478% ( 4) 00:15:08.376 5.381 - 5.404: 98.2853% ( 5) 00:15:08.376 5.404 - 5.428: 98.3077% ( 3) 00:15:08.376 5.428 - 5.452: 98.3302% ( 3) 00:15:08.376 5.452 - 5.476: 98.3677% ( 5) 00:15:08.376 5.476 - 5.499: 98.3976% ( 4) 00:15:08.376 5.499 - 5.523: 98.4126% ( 2) 00:15:08.376 5.523 - 5.547: 98.4276% ( 2) 00:15:08.376 5.547 - 5.570: 98.4500% ( 3) 00:15:08.376 5.570 - 5.594: 98.4650% ( 2) 00:15:08.376 5.594 - 5.618: 98.4949% ( 4) 00:15:08.376 5.618 - 5.641: 98.5024% ( 1) 00:15:08.376 5.641 - 5.665: 98.5099% ( 1) 00:15:08.376 5.665 - 5.689: 98.5399% ( 4) 00:15:08.376 5.689 - 5.713: 98.5474% ( 1) 00:15:08.376 5.736 - 5.760: 98.5698% ( 3) 00:15:08.376 5.760 - 5.784: 98.5773% ( 1) 00:15:08.376 5.855 - 5.879: 98.5848% ( 1) 00:15:08.376 5.879 - 5.902: 98.5923% ( 1) 00:15:08.376 5.926 - 5.950: 98.5998% ( 1) 00:15:08.376 6.068 - 6.116: 98.6073% ( 1) 00:15:08.376 6.305 - 6.353: 98.6148% ( 1) 00:15:08.376 6.400 - 6.447: 98.6222% ( 1) 00:15:08.376 6.447 - 6.495: 98.6297% ( 1) 00:15:08.376 6.637 - 6.684: 98.6372% ( 1) 00:15:08.376 6.732 - 6.779: 98.6597% ( 3) 00:15:08.376 6.874 - 6.921: 98.6821% ( 3) 00:15:08.376 6.921 - 6.969: 98.6896% ( 1) 00:15:08.376 7.064 - 7.111: 98.6971% ( 1) 00:15:08.376 7.111 - 7.159: 98.7046% ( 1) 00:15:08.376 7.253 - 7.301: 98.7121% ( 1) 00:15:08.376 7.301 - 7.348: 98.7196% ( 1) 00:15:08.376 7.348 - 7.396: 98.7420% ( 3) 00:15:08.376 7.443 - 7.490: 98.7495% ( 1) 00:15:08.376 7.490 - 7.538: 98.7570% ( 1) 00:15:08.376 7.538 - 7.585: 98.7645% ( 1) 00:15:08.376 7.680 - 7.727: 98.7720% ( 1) 00:15:08.376 7.775 - 7.822: 98.7795% ( 1) 00:15:08.376 7.917 - 7.964: 98.7870% ( 1) 00:15:08.376 7.964 - 8.012: 98.7945% ( 1) 00:15:08.376 8.012 - 8.059: 98.8019% ( 1) 00:15:08.376 8.154 - 8.201: 98.8094% ( 1) 00:15:08.376 8.201 - 8.249: 98.8169% ( 1) 00:15:08.376 8.581 - 8.628: 98.8244% ( 1) 00:15:08.376 8.628 - 8.676: 98.8394% ( 2) 00:15:08.376 8.723 - 8.770: 98.8469% ( 1) 00:15:08.376 8.818 - 8.865: 98.8544% ( 1) 00:15:08.376 9.007 - 9.055: 98.8618% ( 1) 00:15:08.376 9.102 - 9.150: 98.8693% ( 1) 00:15:08.376 9.150 - 9.197: 98.8768% ( 1) 00:15:08.376 9.339 - 9.387: 98.8843% ( 1) 00:15:08.376 9.387 - 9.434: 98.8918% ( 1) 00:15:08.376 9.434 - 9.481: 98.9068% ( 2) 00:15:08.376 9.481 - 9.529: 98.9143% ( 1) 00:15:08.376 9.529 - 9.576: 98.9292% ( 2) 00:15:08.376 9.813 - 9.861: 98.9367% ( 1) 00:15:08.376 9.956 - 10.003: 98.9442% ( 1) 00:15:08.376 10.098 - 10.145: 98.9517% ( 1) 00:15:08.376 10.145 - 10.193: 98.9592% ( 1) 00:15:08.376 10.335 - 10.382: 98.9667% ( 1) 00:15:08.376 10.382 - 10.430: 98.9742% ( 1) 00:15:08.376 10.572 - 10.619: 98.9891% ( 2) 00:15:08.376 10.714 - 10.761: 98.9966% ( 1) 00:15:08.376 10.809 - 10.856: 99.0041% ( 1) 00:15:08.376 11.046 - 11.093: 99.0116% ( 1) 00:15:08.376 11.093 - 11.141: 99.0266% ( 2) 00:15:08.376 11.236 - 11.283: 99.0341% ( 1) 00:15:08.376 11.473 - 11.520: 99.0416% ( 1) 00:15:08.376 11.567 - 11.615: 99.0565% ( 2) 00:15:08.376 11.710 - 11.757: 99.0640% ( 1) 00:15:08.376 11.757 - 11.804: 99.0715% ( 1) 00:15:08.376 11.899 - 11.947: 99.0790% ( 1) 00:15:08.376 11.947 - 11.994: 99.0865% ( 1) 00:15:08.376 12.136 - 12.231: 99.0940% ( 1) 00:15:08.376 12.231 - 12.326: 99.1089% ( 2) 00:15:08.376 12.421 - 12.516: 99.1164% ( 1) 00:15:08.376 12.990 - 13.084: 99.1239% ( 1) 00:15:08.376 13.084 - 13.179: 99.1314% ( 1) 00:15:08.376 13.748 - 13.843: 99.1389% ( 1) 00:15:08.377 13.843 - 13.938: 99.1464% ( 1) 00:15:08.377 14.033 - 14.127: 99.1539% ( 1) 00:15:08.377 14.127 - 14.222: 99.1614% ( 1) 00:15:08.377 14.317 - 14.412: 99.1689% ( 1) 00:15:08.377 
14.696 - 14.791: 99.1763% ( 1) 00:15:08.377 17.161 - 17.256: 99.1838% ( 1) 00:15:08.377 17.256 - 17.351: 99.2213% ( 5) 00:15:08.377 17.351 - 17.446: 99.2288% ( 1) 00:15:08.377 17.446 - 17.541: 99.2512% ( 3) 00:15:08.377 17.541 - 17.636: 99.2737% ( 3) 00:15:08.377 17.636 - 17.730: 99.3111% ( 5) 00:15:08.377 17.730 - 17.825: 99.3486% ( 5) 00:15:08.377 17.825 - 17.920: 99.3710% ( 3) 00:15:08.377 17.920 - 18.015: 99.4010% ( 4) 00:15:08.377 18.015 - 18.110: 99.4309% ( 4) 00:15:08.377 18.110 - 18.204: 99.5133% ( 11) 00:15:08.377 18.204 - 18.299: 99.5657% ( 7) 00:15:08.377 18.299 - 18.394: 99.5807% ( 2) 00:15:08.377 18.394 - 18.489: 99.5957% ( 2) 00:15:08.377 18.489 - 18.584: 99.6406% ( 6) 00:15:08.377 18.584 - 18.679: 99.6780% ( 5) 00:15:08.377 18.679 - 18.773: 99.6930% ( 2) 00:15:08.377 18.773 - 18.868: 99.7080% ( 2) 00:15:08.377 18.868 - 18.963: 99.7230% ( 2) 00:15:08.377 18.963 - 19.058: 99.7304% ( 1) 00:15:08.377 19.058 - 19.153: 99.7529% ( 3) 00:15:08.377 19.153 - 19.247: 99.7604% ( 1) 00:15:08.377 19.437 - 19.532: 99.7754% ( 2) 00:15:08.377 19.532 - 19.627: 99.7903% ( 2) 00:15:08.377 19.816 - 19.911: 99.8053% ( 2) 00:15:08.377 19.911 - 20.006: 99.8203% ( 2) 00:15:08.377 20.006 - 20.101: 99.8353% ( 2) 00:15:08.377 20.101 - 20.196: 99.8428% ( 1) 00:15:08.377 20.290 - 20.385: 99.8502% ( 1) 00:15:08.377 20.385 - 20.480: 99.8577% ( 1) 00:15:08.377 20.575 - 20.670: 99.8727% ( 2) 00:15:08.377 24.841 - 25.031: 99.8802% ( 1) 00:15:08.377 26.927 - 27.117: 99.8877% ( 1) 00:15:08.377 27.117 - 27.307: 99.8952% ( 1) 00:15:08.377 27.876 - 28.065: 99.9027% ( 1) 00:15:08.377 30.910 - 31.099: 99.9101% ( 1) 00:15:08.377 3980.705 - 4004.978: 99.9626% ( 7) 00:15:08.377 4004.978 - 4029.250: 100.0000% ( 5) 00:15:08.377 00:15:08.377 Complete histogram 00:15:08.377 ================== 00:15:08.377 Range in us Cumulative Count 00:15:08.377 2.050 - 2.062: 0.1872% ( 25) 00:15:08.377 2.062 - 2.074: 30.8499% ( 4095) 00:15:08.377 2.074 - 2.086: 46.2299% ( 2054) 00:15:08.377 2.086 - 2.098: 48.5586% ( 311) 00:15:08.377 2.098 - 2.110: 59.3411% ( 1440) 00:15:08.377 2.110 - 2.121: 63.0475% ( 495) 00:15:08.377 2.121 - 2.133: 66.4845% ( 459) 00:15:08.377 2.133 - 2.145: 79.0640% ( 1680) 00:15:08.377 2.145 - 2.157: 82.2164% ( 421) 00:15:08.377 2.157 - 2.169: 84.2381% ( 270) 00:15:08.377 2.169 - 2.181: 88.0794% ( 513) 00:15:08.377 2.181 - 2.193: 89.4946% ( 189) 00:15:08.377 2.193 - 2.204: 90.2059% ( 95) 00:15:08.377 2.204 - 2.216: 91.3890% ( 158) 00:15:08.377 2.216 - 2.228: 93.2984% ( 255) 00:15:08.377 2.228 - 2.240: 94.2643% ( 129) 00:15:08.377 2.240 - 2.252: 94.6911% ( 57) 00:15:08.377 2.252 - 2.264: 94.8559% ( 22) 00:15:08.377 2.264 - 2.276: 95.0281% ( 23) 00:15:08.377 2.276 - 2.287: 95.1554% ( 17) 00:15:08.377 2.287 - 2.299: 95.4025% ( 33) 00:15:08.377 2.299 - 2.311: 95.5822% ( 24) 00:15:08.377 2.311 - 2.323: 95.6645% ( 11) 00:15:08.377 2.323 - 2.335: 95.7170% ( 7) 00:15:08.377 2.335 - 2.347: 95.7844% ( 9) 00:15:08.377 2.347 - 2.359: 95.9416% ( 21) 00:15:08.377 2.359 - 2.370: 96.2112% ( 36) 00:15:08.377 2.370 - 2.382: 96.5107% ( 40) 00:15:08.377 2.382 - 2.394: 96.8701% ( 48) 00:15:08.377 2.394 - 2.406: 97.1097% ( 32) 00:15:08.377 2.406 - 2.418: 97.2969% ( 25) 00:15:08.377 2.418 - 2.430: 97.4916% ( 26) 00:15:08.377 2.430 - 2.441: 97.5814% ( 12) 00:15:08.377 2.441 - 2.453: 97.6488% ( 9) 00:15:08.377 2.453 - 2.465: 97.7312% ( 11) 00:15:08.377 2.465 - 2.477: 97.7836% ( 7) 00:15:08.377 2.477 - 2.489: 97.7986% ( 2) 00:15:08.377 2.489 - 2.501: 97.8061% ( 1) 00:15:08.377 2.501 - 2.513: 97.8360% ( 4) 00:15:08.377 2.513 - 2.524: 
97.8510% ( 2) 00:15:08.377 2.524 - 2.536: 97.8660% ( 2) 00:15:08.377 2.536 - 2.548: 97.8809% ( 2) 00:15:08.377 2.548 - 2.560: 97.9259% ( 6) 00:15:08.377 2.560 - 2.572: 97.9483% ( 3) 00:15:08.377 2.572 - 2.584: 97.9633% ( 2) 00:15:08.377 2.596 - 2.607: 97.9708% ( 1) 00:15:08.377 2.607 - 2.619: 97.9858% ( 2) 00:15:08.377 2.631 - 2.643: 98.0082% ( 3) 00:15:08.377 2.655 - 2.667: 98.0232% ( 2) 00:15:08.377 2.667 - 2.679: 98.0307% ( 1) 00:15:08.377 2.679 - 2.690: 98.0457% ( 2) 00:15:08.377 2.690 - 2.702: 98.0607% ( 2) 00:15:08.377 2.702 - 2.714: 98.0831% ( 3) 00:15:08.377 2.714 - 2.726: 98.0906% ( 1) 00:15:08.377 2.738 - 2.750: 98.0981% ( 1) 00:15:08.377 2.750 - 2.761: 98.1056% ( 1) 00:15:08.377 2.761 - 2.773: 98.1206% ( 2) 00:15:08.377 2.773 - 2.785: 98.1280% ( 1) 00:15:08.377 2.809 - 2.821: 98.1355% ( 1) 00:15:08.377 2.821 - 2.833: 98.1505% ( 2) 00:15:08.377 2.833 - 2.844: 98.1655% ( 2) 00:15:08.377 2.844 - 2.856: 98.1730% ( 1) 00:15:08.377 2.880 - 2.892: 98.1805% ( 1) 00:15:08.377 2.892 - 2.904: 98.1879% ( 1) 00:15:08.377 2.904 - 2.916: 98.1954% ( 1) 00:15:08.377 2.916 - 2.927: 98.2179% ( 3) 00:15:08.377 2.939 - 2.951: 98.2254% ( 1) 00:15:08.377 2.987 - 2.999: 98.2329% ( 1) 00:15:08.377 2.999 - 3.010: 98.2404% ( 1) 00:15:08.377 3.010 - 3.022: 98.2478% ( 1) 00:15:08.377 3.022 - 3.034: 98.2553% ( 1) 00:15:08.377 3.034 - 3.058: 98.2853% ( 4) 00:15:08.377 3.058 - 3.081: 98.2928% ( 1) 00:15:08.377 3.105 - 3.129: 98.3003% ( 1) 00:15:08.377 3.129 - 3.153: 98.3377% ( 5) 00:15:08.377 3.153 - 3.176: 98.3602% ( 3) 00:15:08.377 3.176 - 3.200: 98.3677% ( 1) 00:15:08.377 3.200 - 3.224: 98.3826% ( 2) 00:15:08.377 3.224 - 3.247: 98.4051% ( 3) 00:15:08.377 3.247 - 3.271: 98.4500% ( 6) 00:15:08.377 3.271 - 3.295: 98.4725% ( 3) 00:15:08.377 3.295 - 3.319: 98.4875% ( 2) 00:15:08.377 3.319 - 3.342: 98.5024% ( 2) 00:15:08.377 3.342 - 3.366: 98.5399% ( 5) 00:15:08.377 3.366 - 3.390: 98.5548% ( 2) 00:15:08.377 3.390 - 3.413: 98.5623% ( 1) 00:15:08.377 3.413 - 3.437: 98.5848% ( 3) 00:15:08.377 3.437 - 3.461: 98.5923% ( 1) 00:15:08.377 3.461 - 3.484: 98.6073% ( 2) 00:15:08.377 3.484 - 3.508: 98.6148% ( 1) 00:15:08.377 3.508 - 3.532: 98.6447% ( 4) 00:15:08.377 3.556 - 3.579: 98.6522% ( 1) 00:15:08.377 3.603 - 3.627: 98.6747% ( 3) 00:15:08.377 3.650 - 3.674: 98.6821% ( 1) 00:15:08.377 3.674 - 3.698: 98.6896% ( 1) 00:15:08.377 3.721 - 3.745: 98.6971% ( 1) 00:15:08.377 3.745 - 3.769: 98.7121% ( 2) 00:15:08.377 3.840 - 3.864: 98.7196% ( 1) 00:15:08.377 3.864 - 3.887: 98.7271% ( 1) 00:15:08.377 3.887 - 3.911: 98.7346% ( 1) 00:15:08.377 3.935 - 3.959: 98.7420% ( 1) 00:15:08.377 4.006 - 4.030: 98.7495% ( 1) 00:15:08.377 4.053 - 4.077: 98.7570% ( 1) 00:15:08.377 4.077 - 4.101: 98.7720% ( 2) 00:15:08.377 4.836 - 4.859: 98.7795% ( 1) 00:15:08.377 4.859 - 4.883: 98.7870% ( 1) 00:15:08.377 4.954 - 4.978: 98.7945% ( 1) 00:15:08.377 5.001 - 5.025: 98.8019% ( 1) 00:15:08.377 5.191 - 5.215: 98.8094% ( 1) 00:15:08.377 5.286 - 5.310: 98.8169% ( 1) 00:15:08.377 5.831 - 5.855: 98.8244% ( 1) 00:15:08.377 5.879 - 5.902: 98.8319% ( 1) 00:15:08.377 5.997 - 6.021: 98.8394% ( 1) 00:15:08.377 6.021 - 6.044: 98.8544% ( 2) 00:15:08.377 6.044 - 6.068: 98.8618% ( 1) 00:15:08.377 6.116 - 6.163: 98.8693% ( 1) 00:15:08.377 6.258 - 6.305: 98.8768% ( 1) 00:15:08.377 6.305 - 6.353: 98.8843% ( 1) 00:15:08.377 6.400 - 6.447: 98.9068% ( 3) 00:15:08.377 6.495 - 6.542: 98.9143% ( 1) 00:15:08.377 6.827 - 6.874: 98.9218% ( 1) 00:15:08.377 6.874 - 6.921: 98.9292% ( 1) 00:15:08.377 6.921 - 6.969: 98.9367% ( 1) 00:15:08.377 7.206 - 7.253: 98.9442% ( 1) 00:15:08.377 
9.766 - 9.813: 98.9517% ( 1) 00:15:08.377 15.455 - 15.550: 98.9667% ( 2) 00:15:08.377 15.550 - 15.644: 98.9742% ( 1) 00:15:08.377 15.644 - 15.739: 98.9966% ( 3) 00:15:08.377 15.834 - 15.929: 99.0041% ( 1) 00:15:08.377 15.929 - 16.024: 99.0266% ( 3) 00:15:08.377 16.024 - 16.119: 99.0341% ( 1) 00:15:08.377 16.119 - 16.213: 99.0416% ( 1) 00:15:08.377 16.308 - 16.403: 99.0565% ( 2) 00:15:08.377 16.403 - 16.498: 99.0790% ( 3) 00:15:08.377 16.498 - 16.593: 99.1015% ( 3) 00:15:08.377 16.593 - 16.687: 99.1239% ( 3) 00:15:08.377 16.687 - 16.782: 99.1689% ( 6) 00:15:08.377 16.782 - 16.877: 99.1838% ( 2) 00:15:08.377 16.877 - 16.972: 99.1988% ( 2) 00:15:08.377 16.972 - 17.067: 99.2437%[2024-07-13 15:26:38.812704] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:08.377 ( 6) 00:15:08.377 17.067 - 17.161: 99.2812% ( 5) 00:15:08.377 17.161 - 17.256: 99.2961% ( 2) 00:15:08.377 17.446 - 17.541: 99.3186% ( 3) 00:15:08.377 17.730 - 17.825: 99.3261% ( 1) 00:15:08.377 17.825 - 17.920: 99.3336% ( 1) 00:15:08.377 18.015 - 18.110: 99.3411% ( 1) 00:15:08.377 18.110 - 18.204: 99.3486% ( 1) 00:15:08.377 18.204 - 18.299: 99.3560% ( 1) 00:15:08.378 18.299 - 18.394: 99.3710% ( 2) 00:15:08.378 18.394 - 18.489: 99.3785% ( 1) 00:15:08.378 18.489 - 18.584: 99.3860% ( 1) 00:15:08.378 18.679 - 18.773: 99.3935% ( 1) 00:15:08.378 19.058 - 19.153: 99.4010% ( 1) 00:15:08.378 23.230 - 23.324: 99.4085% ( 1) 00:15:08.378 3422.436 - 3446.708: 99.4159% ( 1) 00:15:08.378 3980.705 - 4004.978: 99.7304% ( 42) 00:15:08.378 4004.978 - 4029.250: 100.0000% ( 36) 00:15:08.378 00:15:08.378 15:26:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:08.378 15:26:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:08.378 15:26:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:08.378 15:26:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:08.378 15:26:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:08.378 [ 00:15:08.378 { 00:15:08.378 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:08.378 "subtype": "Discovery", 00:15:08.378 "listen_addresses": [], 00:15:08.378 "allow_any_host": true, 00:15:08.378 "hosts": [] 00:15:08.378 }, 00:15:08.378 { 00:15:08.378 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:08.378 "subtype": "NVMe", 00:15:08.378 "listen_addresses": [ 00:15:08.378 { 00:15:08.378 "trtype": "VFIOUSER", 00:15:08.378 "adrfam": "IPv4", 00:15:08.378 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:08.378 "trsvcid": "0" 00:15:08.378 } 00:15:08.378 ], 00:15:08.378 "allow_any_host": true, 00:15:08.378 "hosts": [], 00:15:08.378 "serial_number": "SPDK1", 00:15:08.378 "model_number": "SPDK bdev Controller", 00:15:08.378 "max_namespaces": 32, 00:15:08.378 "min_cntlid": 1, 00:15:08.378 "max_cntlid": 65519, 00:15:08.378 "namespaces": [ 00:15:08.378 { 00:15:08.378 "nsid": 1, 00:15:08.378 "bdev_name": "Malloc1", 00:15:08.378 "name": "Malloc1", 00:15:08.378 "nguid": "7257432BC26649149B34C21E619C9321", 00:15:08.378 "uuid": "7257432b-c266-4914-9b34-c21e619c9321" 00:15:08.378 }, 00:15:08.378 { 00:15:08.378 "nsid": 2, 00:15:08.378 "bdev_name": "Malloc3", 00:15:08.378 "name": "Malloc3", 00:15:08.378 "nguid": 
"52ADAE666FEE4FD3B82A9805003145FA", 00:15:08.378 "uuid": "52adae66-6fee-4fd3-b82a-9805003145fa" 00:15:08.378 } 00:15:08.378 ] 00:15:08.378 }, 00:15:08.378 { 00:15:08.378 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:08.378 "subtype": "NVMe", 00:15:08.378 "listen_addresses": [ 00:15:08.378 { 00:15:08.378 "trtype": "VFIOUSER", 00:15:08.378 "adrfam": "IPv4", 00:15:08.378 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:08.378 "trsvcid": "0" 00:15:08.378 } 00:15:08.378 ], 00:15:08.378 "allow_any_host": true, 00:15:08.378 "hosts": [], 00:15:08.378 "serial_number": "SPDK2", 00:15:08.378 "model_number": "SPDK bdev Controller", 00:15:08.378 "max_namespaces": 32, 00:15:08.378 "min_cntlid": 1, 00:15:08.378 "max_cntlid": 65519, 00:15:08.378 "namespaces": [ 00:15:08.378 { 00:15:08.378 "nsid": 1, 00:15:08.378 "bdev_name": "Malloc2", 00:15:08.378 "name": "Malloc2", 00:15:08.378 "nguid": "01103B8045364B709915F0062D2F74C3", 00:15:08.378 "uuid": "01103b80-4536-4b70-9915-f0062d2f74c3" 00:15:08.378 } 00:15:08.378 ] 00:15:08.378 } 00:15:08.378 ] 00:15:08.378 15:26:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:08.378 15:26:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1071021 00:15:08.378 15:26:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:08.378 15:26:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:08.378 15:26:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:08.378 15:26:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:08.378 15:26:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:08.378 15:26:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:08.378 15:26:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:08.378 15:26:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:08.693 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.693 [2024-07-13 15:26:39.289343] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:08.693 Malloc4 00:15:08.693 15:26:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:08.950 [2024-07-13 15:26:39.633906] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:08.950 15:26:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:08.950 Asynchronous Event Request test 00:15:08.950 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:08.950 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:08.950 Registering asynchronous event callbacks... 00:15:08.950 Starting namespace attribute notice tests for all controllers... 
00:15:08.950 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:08.950 aer_cb - Changed Namespace 00:15:08.950 Cleaning up... 00:15:09.209 [ 00:15:09.209 { 00:15:09.209 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:09.209 "subtype": "Discovery", 00:15:09.209 "listen_addresses": [], 00:15:09.209 "allow_any_host": true, 00:15:09.209 "hosts": [] 00:15:09.209 }, 00:15:09.209 { 00:15:09.209 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:09.209 "subtype": "NVMe", 00:15:09.209 "listen_addresses": [ 00:15:09.209 { 00:15:09.209 "trtype": "VFIOUSER", 00:15:09.209 "adrfam": "IPv4", 00:15:09.209 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:09.209 "trsvcid": "0" 00:15:09.209 } 00:15:09.209 ], 00:15:09.209 "allow_any_host": true, 00:15:09.209 "hosts": [], 00:15:09.209 "serial_number": "SPDK1", 00:15:09.209 "model_number": "SPDK bdev Controller", 00:15:09.209 "max_namespaces": 32, 00:15:09.209 "min_cntlid": 1, 00:15:09.209 "max_cntlid": 65519, 00:15:09.209 "namespaces": [ 00:15:09.209 { 00:15:09.209 "nsid": 1, 00:15:09.209 "bdev_name": "Malloc1", 00:15:09.209 "name": "Malloc1", 00:15:09.209 "nguid": "7257432BC26649149B34C21E619C9321", 00:15:09.209 "uuid": "7257432b-c266-4914-9b34-c21e619c9321" 00:15:09.209 }, 00:15:09.209 { 00:15:09.209 "nsid": 2, 00:15:09.209 "bdev_name": "Malloc3", 00:15:09.209 "name": "Malloc3", 00:15:09.209 "nguid": "52ADAE666FEE4FD3B82A9805003145FA", 00:15:09.209 "uuid": "52adae66-6fee-4fd3-b82a-9805003145fa" 00:15:09.209 } 00:15:09.209 ] 00:15:09.209 }, 00:15:09.209 { 00:15:09.209 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:09.209 "subtype": "NVMe", 00:15:09.209 "listen_addresses": [ 00:15:09.209 { 00:15:09.209 "trtype": "VFIOUSER", 00:15:09.209 "adrfam": "IPv4", 00:15:09.209 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:09.209 "trsvcid": "0" 00:15:09.209 } 00:15:09.209 ], 00:15:09.209 "allow_any_host": true, 00:15:09.209 "hosts": [], 00:15:09.209 "serial_number": "SPDK2", 00:15:09.209 "model_number": "SPDK bdev Controller", 00:15:09.209 "max_namespaces": 32, 00:15:09.209 "min_cntlid": 1, 00:15:09.209 "max_cntlid": 65519, 00:15:09.209 "namespaces": [ 00:15:09.209 { 00:15:09.209 "nsid": 1, 00:15:09.209 "bdev_name": "Malloc2", 00:15:09.209 "name": "Malloc2", 00:15:09.209 "nguid": "01103B8045364B709915F0062D2F74C3", 00:15:09.209 "uuid": "01103b80-4536-4b70-9915-f0062d2f74c3" 00:15:09.209 }, 00:15:09.209 { 00:15:09.209 "nsid": 2, 00:15:09.209 "bdev_name": "Malloc4", 00:15:09.209 "name": "Malloc4", 00:15:09.209 "nguid": "AFD50EAA0E4148B9A1CBFB5660AA373F", 00:15:09.209 "uuid": "afd50eaa-0e41-48b9-a1cb-fb5660aa373f" 00:15:09.209 } 00:15:09.209 ] 00:15:09.209 } 00:15:09.209 ] 00:15:09.209 15:26:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1071021 00:15:09.209 15:26:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:09.209 15:26:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1065337 00:15:09.209 15:26:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1065337 ']' 00:15:09.209 15:26:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1065337 00:15:09.209 15:26:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:15:09.209 15:26:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:09.209 15:26:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1065337 00:15:09.209 15:26:39 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:09.209 15:26:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:09.209 15:26:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1065337' 00:15:09.209 killing process with pid 1065337 00:15:09.209 15:26:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1065337 00:15:09.209 15:26:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1065337 00:15:09.775 15:26:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:09.775 15:26:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:09.775 15:26:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:09.775 15:26:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:09.775 15:26:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:09.775 15:26:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1071163 00:15:09.775 15:26:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:09.775 15:26:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1071163' 00:15:09.775 Process pid: 1071163 00:15:09.775 15:26:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:09.775 15:26:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1071163 00:15:09.775 15:26:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1071163 ']' 00:15:09.775 15:26:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.775 15:26:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.775 15:26:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.775 15:26:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.775 15:26:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:09.775 [2024-07-13 15:26:40.320506] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:09.775 [2024-07-13 15:26:40.321601] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:09.775 [2024-07-13 15:26:40.321669] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.775 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.775 [2024-07-13 15:26:40.354891] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:09.775 [2024-07-13 15:26:40.387032] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:09.775 [2024-07-13 15:26:40.477059] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.775 [2024-07-13 15:26:40.477116] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.775 [2024-07-13 15:26:40.477133] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.775 [2024-07-13 15:26:40.477147] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.775 [2024-07-13 15:26:40.477159] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:09.775 [2024-07-13 15:26:40.477233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.775 [2024-07-13 15:26:40.477312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.775 [2024-07-13 15:26:40.479889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.775 [2024-07-13 15:26:40.479914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.034 [2024-07-13 15:26:40.589346] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:10.034 [2024-07-13 15:26:40.589598] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:10.034 [2024-07-13 15:26:40.589892] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:10.034 [2024-07-13 15:26:40.590491] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:10.034 [2024-07-13 15:26:40.590734] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
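With the interrupt-mode target up, the setup_nvmf_vfio_user steps traced below reduce to one VFIOUSER transport plus, per device, a malloc bdev, a subsystem, a namespace and a vfio-user listener. Condensed from the xtrace for readability (device 1 shown; device 2 is identical with the names and socket path swapped; the long rpc.py path is shortened to a variable here):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER -M -I      # -M -I are the transport_args this interrupt-mode run passes in
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

An initiator then reaches the controller through a transport ID string of the form used by the earlier perf runs, e.g. -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'.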
00:15:10.034 15:26:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.034 15:26:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:15:10.034 15:26:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:10.968 15:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:11.226 15:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:11.226 15:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:11.226 15:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:11.226 15:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:11.226 15:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:11.485 Malloc1 00:15:11.485 15:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:11.742 15:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:12.305 15:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:12.305 15:26:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:12.305 15:26:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:12.305 15:26:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:12.562 Malloc2 00:15:12.562 15:26:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:13.126 15:26:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:13.126 15:26:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:13.384 15:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:13.384 15:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1071163 00:15:13.384 15:26:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1071163 ']' 00:15:13.384 15:26:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1071163 00:15:13.384 15:26:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:15:13.384 15:26:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:13.384 15:26:44 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1071163 00:15:13.384 15:26:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:13.384 15:26:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:13.384 15:26:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1071163' 00:15:13.384 killing process with pid 1071163 00:15:13.384 15:26:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1071163 00:15:13.384 15:26:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1071163 00:15:13.642 15:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:13.642 15:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:13.642 00:15:13.642 real 0m53.272s 00:15:13.642 user 3m30.146s 00:15:13.642 sys 0m4.501s 00:15:13.642 15:26:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:13.642 15:26:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:13.642 ************************************ 00:15:13.642 END TEST nvmf_vfio_user 00:15:13.642 ************************************ 00:15:13.900 15:26:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:13.901 15:26:44 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:13.901 15:26:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:13.901 15:26:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:13.901 15:26:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:13.901 ************************************ 00:15:13.901 START TEST nvmf_vfio_user_nvme_compliance 00:15:13.901 ************************************ 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:13.901 * Looking for test storage... 
00:15:13.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=1071756 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1071756' 00:15:13.901 Process pid: 1071756 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1071756 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1071756 ']' 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:13.901 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:13.901 [2024-07-13 15:26:44.567675] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:13.901 [2024-07-13 15:26:44.567754] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.901 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.901 [2024-07-13 15:26:44.598761] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:13.901 [2024-07-13 15:26:44.630604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:14.159 [2024-07-13 15:26:44.722128] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.159 [2024-07-13 15:26:44.722190] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:14.159 [2024-07-13 15:26:44.722206] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.159 [2024-07-13 15:26:44.722219] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.159 [2024-07-13 15:26:44.722231] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:14.159 [2024-07-13 15:26:44.722319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.159 [2024-07-13 15:26:44.722354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:14.159 [2024-07-13 15:26:44.722372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.159 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.159 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:15:14.159 15:26:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:15.094 15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:15.094 15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:15.094 15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:15.094 15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.094 15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:15.094 15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.094 15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:15.094 15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:15.094 15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.094 15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:15.352 malloc0 00:15:15.352 15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.352 15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:15.352 15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.352 15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:15.352 15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.352 15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:15.352 15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.352 15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:15.352 15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.352 15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:15.352 15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.352 15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:15.352 15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.352 
15:26:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:15.352 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.352 00:15:15.352 00:15:15.352 CUnit - A unit testing framework for C - Version 2.1-3 00:15:15.352 http://cunit.sourceforge.net/ 00:15:15.352 00:15:15.352 00:15:15.352 Suite: nvme_compliance 00:15:15.352 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-13 15:26:46.070056] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:15.352 [2024-07-13 15:26:46.071554] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:15.352 [2024-07-13 15:26:46.071579] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:15.352 [2024-07-13 15:26:46.071591] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:15.352 [2024-07-13 15:26:46.073072] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:15.352 passed 00:15:15.610 Test: admin_identify_ctrlr_verify_fused ...[2024-07-13 15:26:46.156710] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:15.610 [2024-07-13 15:26:46.162751] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:15.610 passed 00:15:15.610 Test: admin_identify_ns ...[2024-07-13 15:26:46.249266] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:15.610 [2024-07-13 15:26:46.306897] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:15.610 [2024-07-13 15:26:46.314898] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:15.610 [2024-07-13 15:26:46.336007] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:15.610 passed 00:15:15.868 Test: admin_get_features_mandatory_features ...[2024-07-13 15:26:46.421321] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:15.868 [2024-07-13 15:26:46.424345] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:15.868 passed 00:15:15.868 Test: admin_get_features_optional_features ...[2024-07-13 15:26:46.507863] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:15.868 [2024-07-13 15:26:46.510891] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:15.868 passed 00:15:15.868 Test: admin_set_features_number_of_queues ...[2024-07-13 15:26:46.595472] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:16.126 [2024-07-13 15:26:46.700024] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:16.126 passed 00:15:16.126 Test: admin_get_log_page_mandatory_logs ...[2024-07-13 15:26:46.785136] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:16.126 [2024-07-13 15:26:46.788154] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:16.126 passed 00:15:16.126 Test: admin_get_log_page_with_lpo ...[2024-07-13 15:26:46.870505] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:16.384 [2024-07-13 15:26:46.941900] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:16.384 [2024-07-13 15:26:46.954961] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:16.384 passed 00:15:16.384 Test: fabric_property_get ...[2024-07-13 15:26:47.036077] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:16.384 [2024-07-13 15:26:47.037383] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:16.384 [2024-07-13 15:26:47.039099] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:16.384 passed 00:15:16.384 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-13 15:26:47.126682] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:16.384 [2024-07-13 15:26:47.128014] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:16.384 [2024-07-13 15:26:47.129708] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:16.642 passed 00:15:16.642 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-13 15:26:47.214880] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:16.642 [2024-07-13 15:26:47.296876] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:16.642 [2024-07-13 15:26:47.312873] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:16.642 [2024-07-13 15:26:47.318115] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:16.642 passed 00:15:16.642 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-13 15:26:47.406634] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:16.642 [2024-07-13 15:26:47.408030] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:16.900 [2024-07-13 15:26:47.409664] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:16.900 passed 00:15:16.900 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-13 15:26:47.492845] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:16.900 [2024-07-13 15:26:47.567874] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:16.900 [2024-07-13 15:26:47.591874] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:16.900 [2024-07-13 15:26:47.596981] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:16.900 passed 00:15:17.158 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-13 15:26:47.685712] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.158 [2024-07-13 15:26:47.687030] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:17.158 [2024-07-13 15:26:47.687077] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:17.158 [2024-07-13 15:26:47.688732] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.158 passed 00:15:17.158 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-13 15:26:47.770893] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.158 [2024-07-13 15:26:47.864874] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:15:17.158 [2024-07-13 15:26:47.872887] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:17.158 [2024-07-13 15:26:47.880876] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:17.158 [2024-07-13 15:26:47.888879] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:17.158 [2024-07-13 15:26:47.917993] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.416 passed 00:15:17.416 Test: admin_create_io_sq_verify_pc ...[2024-07-13 15:26:48.002602] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.416 [2024-07-13 15:26:48.018889] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:17.416 [2024-07-13 15:26:48.036038] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.416 passed 00:15:17.416 Test: admin_create_io_qp_max_qps ...[2024-07-13 15:26:48.122592] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.789 [2024-07-13 15:26:49.219882] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:19.047 [2024-07-13 15:26:49.611328] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.047 passed 00:15:19.047 Test: admin_create_io_sq_shared_cq ...[2024-07-13 15:26:49.694660] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.306 [2024-07-13 15:26:49.825878] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:19.306 [2024-07-13 15:26:49.865980] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.306 passed 00:15:19.306 00:15:19.306 Run Summary: Type Total Ran Passed Failed Inactive 00:15:19.306 suites 1 1 n/a 0 0 00:15:19.306 tests 18 18 18 0 0 00:15:19.306 asserts 360 360 360 0 n/a 00:15:19.306 00:15:19.306 Elapsed time = 1.576 seconds 00:15:19.306 15:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1071756 00:15:19.306 15:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1071756 ']' 00:15:19.306 15:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1071756 00:15:19.306 15:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:15:19.306 15:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:19.306 15:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1071756 00:15:19.306 15:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:19.306 15:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:19.306 15:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1071756' 00:15:19.306 killing process with pid 1071756 00:15:19.306 15:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1071756 00:15:19.306 15:26:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1071756 00:15:19.565 15:26:50 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:19.565 00:15:19.565 real 0m5.730s 00:15:19.565 user 0m16.077s 00:15:19.565 sys 0m0.542s 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:19.565 ************************************ 00:15:19.565 END TEST nvmf_vfio_user_nvme_compliance 00:15:19.565 ************************************ 00:15:19.565 15:26:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:19.565 15:26:50 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:19.565 15:26:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:19.565 15:26:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:19.565 15:26:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:19.565 ************************************ 00:15:19.565 START TEST nvmf_vfio_user_fuzz 00:15:19.565 ************************************ 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:19.565 * Looking for test storage... 00:15:19.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.565 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:19.566 15:26:50 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1072480 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1072480' 00:15:19.566 Process pid: 1072480 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1072480 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1072480 ']' 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
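(For reference — the fuzz pass that follows drives this vfio-user endpoint with SPDK's nvme_fuzz app. A minimal sketch of an equivalent standalone invocation, assuming a target is already listening at /var/run/vfio-user with subsystem nqn.2021-09.io.spdk:cnode0; the 30-second runtime and fixed seed match this run, and the -N -a flags are copied as invoked.)

  # Sketch only: repository-relative path, flags taken from the invocation logged below.
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a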
00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:19.566 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:20.133 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:20.133 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:15:20.133 15:26:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:21.068 15:26:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:21.068 15:26:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.068 15:26:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:21.068 15:26:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.068 15:26:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:21.068 15:26:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:21.068 15:26:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.068 15:26:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:21.068 malloc0 00:15:21.068 15:26:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.068 15:26:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:21.068 15:26:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.068 15:26:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:21.068 15:26:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.068 15:26:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:21.068 15:26:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.068 15:26:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:21.068 15:26:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.068 15:26:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:21.068 15:26:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.068 15:26:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:21.068 15:26:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.068 15:26:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:21.068 15:26:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:53.139 Fuzzing completed. 
Shutting down the fuzz application 00:15:53.139 00:15:53.139 Dumping successful admin opcodes: 00:15:53.139 8, 9, 10, 24, 00:15:53.139 Dumping successful io opcodes: 00:15:53.139 0, 00:15:53.139 NS: 0x200003a1ef00 I/O qp, Total commands completed: 644835, total successful commands: 2504, random_seed: 3824758016 00:15:53.139 NS: 0x200003a1ef00 admin qp, Total commands completed: 83491, total successful commands: 666, random_seed: 3108724672 00:15:53.139 15:27:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:53.139 15:27:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.139 15:27:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:53.139 15:27:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.139 15:27:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1072480 00:15:53.139 15:27:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1072480 ']' 00:15:53.139 15:27:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1072480 00:15:53.139 15:27:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:15:53.139 15:27:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:53.139 15:27:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1072480 00:15:53.139 15:27:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:53.139 15:27:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:53.139 15:27:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1072480' 00:15:53.139 killing process with pid 1072480 00:15:53.139 15:27:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1072480 00:15:53.139 15:27:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1072480 00:15:53.139 15:27:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:53.139 15:27:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:53.139 00:15:53.139 real 0m32.185s 00:15:53.139 user 0m32.961s 00:15:53.139 sys 0m26.733s 00:15:53.139 15:27:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:53.139 15:27:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:53.139 ************************************ 00:15:53.139 END TEST nvmf_vfio_user_fuzz 00:15:53.139 ************************************ 00:15:53.139 15:27:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:53.139 15:27:22 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:53.139 15:27:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:53.139 15:27:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.139 15:27:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:53.139 ************************************ 00:15:53.139 
START TEST nvmf_host_management 00:15:53.139 ************************************ 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:53.139 * Looking for test storage... 00:15:53.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.139 15:27:22 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:53.139 15:27:22 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:15:53.139 15:27:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:53.705 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:53.705 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:53.705 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:53.705 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:53.705 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:53.706 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:53.706 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:53.706 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:53.706 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:53.706 15:27:24 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:53.706 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:53.706 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:53.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:53.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:15:53.706 00:15:53.706 --- 10.0.0.2 ping statistics --- 00:15:53.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.706 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:15:53.706 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:53.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:53.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:15:53.706 00:15:53.706 --- 10.0.0.1 ping statistics --- 00:15:53.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.706 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:15:53.706 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:53.706 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:15:53.706 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:53.965 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:53.965 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:53.965 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:53.965 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:53.965 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:53.965 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:53.965 15:27:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:15:53.965 15:27:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:15:53.965 15:27:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:53.965 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:53.965 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:53.965 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:53.965 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1078419 00:15:53.965 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:53.965 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1078419 00:15:53.965 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1078419 ']' 00:15:53.965 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.965 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:53.965 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:53.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.965 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:53.965 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:53.965 [2024-07-13 15:27:24.545035] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:53.965 [2024-07-13 15:27:24.545137] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.965 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.965 [2024-07-13 15:27:24.583756] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:53.965 [2024-07-13 15:27:24.610562] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:53.965 [2024-07-13 15:27:24.701191] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.965 [2024-07-13 15:27:24.701267] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.965 [2024-07-13 15:27:24.701285] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:53.965 [2024-07-13 15:27:24.701296] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:53.965 [2024-07-13 15:27:24.701321] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:53.965 [2024-07-13 15:27:24.701385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.965 [2024-07-13 15:27:24.701446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:53.965 [2024-07-13 15:27:24.701511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:53.965 [2024-07-13 15:27:24.701514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:54.224 [2024-07-13 15:27:24.858766] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:54.224 15:27:24 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:54.224 Malloc0 00:15:54.224 [2024-07-13 15:27:24.920102] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1078462 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1078462 /var/tmp/bdevperf.sock 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1078462 ']' 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:54.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:54.224 { 00:15:54.224 "params": { 00:15:54.224 "name": "Nvme$subsystem", 00:15:54.224 "trtype": "$TEST_TRANSPORT", 00:15:54.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:54.224 "adrfam": "ipv4", 00:15:54.224 "trsvcid": "$NVMF_PORT", 00:15:54.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:54.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:54.224 "hdgst": ${hdgst:-false}, 00:15:54.224 "ddgst": ${ddgst:-false} 00:15:54.224 }, 00:15:54.224 "method": "bdev_nvme_attach_controller" 00:15:54.224 } 00:15:54.224 EOF 00:15:54.224 )") 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:54.224 15:27:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:54.224 "params": { 00:15:54.224 "name": "Nvme0", 00:15:54.224 "trtype": "tcp", 00:15:54.224 "traddr": "10.0.0.2", 00:15:54.224 "adrfam": "ipv4", 00:15:54.224 "trsvcid": "4420", 00:15:54.224 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:54.224 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:54.224 "hdgst": false, 00:15:54.224 "ddgst": false 00:15:54.224 }, 00:15:54.224 "method": "bdev_nvme_attach_controller" 00:15:54.224 }' 00:15:54.483 [2024-07-13 15:27:25.002085] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:54.483 [2024-07-13 15:27:25.002174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1078462 ] 00:15:54.483 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.483 [2024-07-13 15:27:25.035109] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:54.483 [2024-07-13 15:27:25.064164] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.483 [2024-07-13 15:27:25.151423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.741 Running I/O for 10 seconds... 
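Aside for readers following the trace: the gen_nvmf_target_json fragment printed above is what bdevperf consumes over /dev/fd/63. A minimal standalone sketch of the same launch follows; the address, NQNs, queue depth and paths are copied from the trace, while the outer "subsystems"/"bdev" wrapper is an assumption here (the trace only shows the attach-controller fragment being generated).

#!/usr/bin/env bash
# Sketch: feed bdevperf the one-controller config shown in the trace above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # repo path from the trace

gen_config() {
cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# 64-deep, 64 KiB verify workload for 10 seconds, matching the invocation above.
"$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json <(gen_config) -q 64 -o 65536 -w verify -t 10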
00:15:54.741 15:27:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:54.741 15:27:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:15:54.741 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:54.741 15:27:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.741 15:27:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:54.741 15:27:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.741 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:54.741 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:54.741 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:54.741 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:54.741 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:15:54.741 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:15:54.741 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:54.741 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:54.741 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:54.741 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:54.741 15:27:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.741 15:27:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:54.741 15:27:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.741 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:15:54.741 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:15:54.741 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:15:55.000 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:15:55.000 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:55.000 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:55.000 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:55.000 15:27:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.000 15:27:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:55.000 15:27:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.000 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=513 00:15:55.000 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 513 -ge 100 ']' 00:15:55.000 15:27:25 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:15:55.000 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:15:55.000 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:15:55.000 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:55.000 15:27:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.000 15:27:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:55.000 15:27:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.000 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:55.000 15:27:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.000 15:27:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:55.000 [2024-07-13 15:27:25.727307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.000 [2024-07-13 15:27:25.727353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.000 [2024-07-13 15:27:25.727371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.000 [2024-07-13 15:27:25.727386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.000 [2024-07-13 15:27:25.727399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.000 [2024-07-13 15:27:25.727413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.000 [2024-07-13 15:27:25.727427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.000 [2024-07-13 15:27:25.727440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.000 [2024-07-13 15:27:25.727453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddb50 is same with the state(5) to be set 00:15:55.000 [2024-07-13 15:27:25.728321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.000 [2024-07-13 15:27:25.728356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.000 [2024-07-13 15:27:25.728399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.000 [2024-07-13 15:27:25.728415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.000 [2024-07-13 15:27:25.728432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.000 
[2024-07-13 15:27:25.728446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.000 [2024-07-13 15:27:25.728465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.000 [2024-07-13 15:27:25.728480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.000 [2024-07-13 15:27:25.728496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.728512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.728527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.728542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.728558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.728573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.728588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.728603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.728619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.728634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.728650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.728665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.728681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.728695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.728712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.728726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.728742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 
15:27:25.728757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.728777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.728793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.728809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.728824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.728839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.728854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.728876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.728893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.728909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.728924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.728940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.728955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.728971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.728986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.729002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.729016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.729032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.729047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.729063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 
15:27:25.729078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.729094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.729108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.729124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.729139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.729155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.729173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.729190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.729205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.729221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.729236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.729252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.729267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.729282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.729297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.729313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.729327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.729343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.729357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.729373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 
15:27:25.729388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.729404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.729418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.729434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.729449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.729464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.001 [2024-07-13 15:27:25.729479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.001 [2024-07-13 15:27:25.729495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.729509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.729525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.729539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.729559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.729575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.729591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.729605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.729621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.729636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.729651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.729666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.729682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 
15:27:25.729697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.729713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.729727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.729743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.729758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.729774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.729788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.729804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.729819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.729834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.729849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.729871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.729886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.729902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.729917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.729933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.729952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.729969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.729984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.730000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 
15:27:25.730015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.730031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.730045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.730061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.730076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.730091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.730106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.730122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.730136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.730152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.730167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.730182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.730197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.730213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.730227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.730243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.730258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.730274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.730288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.730304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 
15:27:25.730322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.730340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:55.002 [2024-07-13 15:27:25.730354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.002 [2024-07-13 15:27:25.730435] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9eee10 was disconnected and freed. reset controller. 00:15:55.002 15:27:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.002 15:27:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:15:55.002 [2024-07-13 15:27:25.731564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:55.002 task offset: 73728 on job bdev=Nvme0n1 fails 00:15:55.002 00:15:55.002 Latency(us) 00:15:55.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.002 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:55.002 Job: Nvme0n1 ended in about 0.40 seconds with error 00:15:55.002 Verification LBA range: start 0x0 length 0x400 00:15:55.002 Nvme0n1 : 0.40 1427.24 89.20 158.58 0.00 39224.68 2536.49 36505.98 00:15:55.002 =================================================================================================================== 00:15:55.002 Total : 1427.24 89.20 158.58 0.00 39224.68 2536.49 36505.98 00:15:55.002 [2024-07-13 15:27:25.733424] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:55.002 [2024-07-13 15:27:25.733467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ddb50 (9): Bad file descriptor 00:15:55.002 [2024-07-13 15:27:25.746418] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
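The failure injection recorded above boils down to polling bdevperf's read counter over its RPC socket and then yanking the host out of the subsystem, which is why the trace shows a burst of ABORTED / SQ DELETION completions followed by a successful controller reset. A condensed sketch of that sequence, using the same rpc.py verbs seen in the trace (the rpc.py path is the repo's standard location and an assumption here):

#!/usr/bin/env bash
# Sketch of the host-management step: wait until bdevperf has completed at
# least 100 reads, then remove and re-add the host NQN so the initiator sees
# its queues aborted and has to reset the controller.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed helper path

# Poll Nvme0n1 read completions through bdevperf's RPC socket (up to 10 tries).
for _ in $(seq 10); do
    reads=$("$RPC" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break
    sleep 0.25
done

# Drop the host from the subsystem: in-flight I/O is aborted (the SQ DELETION
# lines above), then adding it back lets bdevperf reconnect and reset.
"$RPC" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
"$RPC" nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0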
00:15:56.381 15:27:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1078462 00:15:56.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1078462) - No such process 00:15:56.381 15:27:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:15:56.381 15:27:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:15:56.381 15:27:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:56.381 15:27:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:15:56.381 15:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:56.381 15:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:56.381 15:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:56.381 15:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:56.381 { 00:15:56.381 "params": { 00:15:56.381 "name": "Nvme$subsystem", 00:15:56.381 "trtype": "$TEST_TRANSPORT", 00:15:56.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:56.382 "adrfam": "ipv4", 00:15:56.382 "trsvcid": "$NVMF_PORT", 00:15:56.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:56.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:56.382 "hdgst": ${hdgst:-false}, 00:15:56.382 "ddgst": ${ddgst:-false} 00:15:56.382 }, 00:15:56.382 "method": "bdev_nvme_attach_controller" 00:15:56.382 } 00:15:56.382 EOF 00:15:56.382 )") 00:15:56.382 15:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:56.382 15:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:15:56.382 15:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:56.382 15:27:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:56.382 "params": { 00:15:56.382 "name": "Nvme0", 00:15:56.382 "trtype": "tcp", 00:15:56.382 "traddr": "10.0.0.2", 00:15:56.382 "adrfam": "ipv4", 00:15:56.382 "trsvcid": "4420", 00:15:56.382 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:56.382 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:56.382 "hdgst": false, 00:15:56.382 "ddgst": false 00:15:56.382 }, 00:15:56.382 "method": "bdev_nvme_attach_controller" 00:15:56.382 }' 00:15:56.382 [2024-07-13 15:27:26.780094] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:15:56.382 [2024-07-13 15:27:26.780196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1078737 ] 00:15:56.382 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.382 [2024-07-13 15:27:26.811827] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:56.382 [2024-07-13 15:27:26.841080] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.382 [2024-07-13 15:27:26.928163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.382 Running I/O for 1 seconds... 
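The step above is the "survive an ungraceful initiator exit" check: the first bdevperf is killed hard (it had in fact already exited, hence "No such process"), stale CPU lock files are cleared, and a short one-second verify run is launched against the same target to prove it still serves I/O. A compact sketch, reusing SPDK and gen_config from the earlier sketch:

# Kill the first initiator if it is still around, then re-verify the target.
kill -9 "$perfpid" 2>/dev/null || true          # first bdevperf, pid from the trace
rm -f /var/tmp/spdk_cpu_lock_00{1..4}           # stale CPU-lock files, as in the log
"$SPDK/build/examples/bdevperf" --json <(gen_config) -q 64 -o 65536 -w verify -t 1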
00:15:57.759 00:15:57.759 Latency(us) 00:15:57.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.759 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:57.759 Verification LBA range: start 0x0 length 0x400 00:15:57.759 Nvme0n1 : 1.04 1540.52 96.28 0.00 0.00 40902.25 9611.95 33204.91 00:15:57.759 =================================================================================================================== 00:15:57.759 Total : 1540.52 96.28 0.00 0.00 40902.25 9611.95 33204.91 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:57.759 rmmod nvme_tcp 00:15:57.759 rmmod nvme_fabrics 00:15:57.759 rmmod nvme_keyring 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1078419 ']' 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1078419 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1078419 ']' 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1078419 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1078419 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1078419' 00:15:57.759 killing process with pid 1078419 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1078419 00:15:57.759 15:27:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1078419 00:15:58.017 [2024-07-13 15:27:28.664398] app.c: 
710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:58.017 15:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:58.017 15:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:58.017 15:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:58.017 15:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:58.017 15:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:58.017 15:27:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.017 15:27:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:58.017 15:27:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.586 15:27:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:00.586 15:27:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:00.586 00:16:00.586 real 0m8.269s 00:16:00.586 user 0m18.370s 00:16:00.586 sys 0m2.550s 00:16:00.586 15:27:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:00.586 15:27:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:00.586 ************************************ 00:16:00.586 END TEST nvmf_host_management 00:16:00.586 ************************************ 00:16:00.586 15:27:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:00.586 15:27:30 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:00.586 15:27:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:00.586 15:27:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:00.586 15:27:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:00.586 ************************************ 00:16:00.586 START TEST nvmf_lvol 00:16:00.586 ************************************ 00:16:00.586 15:27:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:00.586 * Looking for test storage... 
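Between the two tests the trace walks through nvmftestfini: the kernel NVMe/TCP initiator modules are unloaded, the in-namespace target is stopped, and the test namespace and addresses are dropped so nvmf_lvol starts from a clean slate. Condensed sketch of that teardown; the exact namespace removal command is hidden behind _remove_spdk_ns in the trace, so the ip netns delete line is an assumption:

# Teardown after the host-management run.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null   # nvmf_tgt, pid 1078419 in this run
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1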
00:16:00.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:00.586 15:27:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:00.586 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:00.586 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:00.586 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.586 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.586 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.586 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.586 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.586 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.586 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.586 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.586 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.586 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.586 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.586 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.586 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.586 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:00.586 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:00.586 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:00.586 15:27:30 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.586 15:27:30 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.587 15:27:30 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:00.587 15:27:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.964 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:01.964 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:02.226 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:02.226 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:02.226 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:02.226 
15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:02.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:02.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:16:02.226 00:16:02.226 --- 10.0.0.2 ping statistics --- 00:16:02.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.226 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:02.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:02.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:16:02.226 00:16:02.226 --- 10.0.0.1 ping statistics --- 00:16:02.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.226 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1080809 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1080809 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1080809 ']' 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:02.226 15:27:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:02.226 [2024-07-13 15:27:32.953629] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:16:02.226 [2024-07-13 15:27:32.953711] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.226 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.485 [2024-07-13 15:27:32.993067] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:02.485 [2024-07-13 15:27:33.019837] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:02.485 [2024-07-13 15:27:33.107934] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
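The nvmf_tcp_init step traced above moves one port of the two-port NIC into its own network namespace so that target and initiator can exercise real hardware on a single host. Condensed from the trace, keeping this run's interface names (cvl_0_0, cvl_0_1) and 10.0.0.0/24 addressing, the setup is roughly:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns

The two pings are the readiness check; only after both succeed does the harness load nvme-tcp and start the target inside the namespace.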
00:16:02.485 [2024-07-13 15:27:33.107997] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:02.485 [2024-07-13 15:27:33.108017] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:02.485 [2024-07-13 15:27:33.108028] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:02.485 [2024-07-13 15:27:33.108038] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:02.485 [2024-07-13 15:27:33.108095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.485 [2024-07-13 15:27:33.108154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:02.485 [2024-07-13 15:27:33.108157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.485 15:27:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:02.485 15:27:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:16:02.485 15:27:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:02.485 15:27:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:02.485 15:27:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:02.485 15:27:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:02.485 15:27:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:02.742 [2024-07-13 15:27:33.469944] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:02.742 15:27:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:03.001 15:27:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:03.001 15:27:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:03.260 15:27:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:03.260 15:27:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:03.518 15:27:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:03.795 15:27:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f98a4c4d-e7b8-4652-96a0-a9b17ce11873 00:16:03.795 15:27:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f98a4c4d-e7b8-4652-96a0-a9b17ce11873 lvol 20 00:16:04.071 15:27:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=63099b49-5f26-41a9-947e-675a31e28a1c 00:16:04.071 15:27:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:04.329 15:27:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 63099b49-5f26-41a9-947e-675a31e28a1c 00:16:04.587 15:27:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:04.844 [2024-07-13 15:27:35.522393] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:04.844 15:27:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:05.101 15:27:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1081236 00:16:05.101 15:27:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:05.101 15:27:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:05.101 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.036 15:27:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 63099b49-5f26-41a9-947e-675a31e28a1c MY_SNAPSHOT 00:16:06.605 15:27:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=37ad363e-4981-431e-9273-51ef02054d08 00:16:06.605 15:27:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 63099b49-5f26-41a9-947e-675a31e28a1c 30 00:16:06.864 15:27:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 37ad363e-4981-431e-9273-51ef02054d08 MY_CLONE 00:16:07.121 15:27:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=cb9f403c-b4af-459c-86d8-3900f7ff7c91 00:16:07.121 15:27:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate cb9f403c-b4af-459c-86d8-3900f7ff7c91 00:16:07.689 15:27:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1081236 00:16:15.808 Initializing NVMe Controllers 00:16:15.808 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:15.808 Controller IO queue size 128, less than required. 00:16:15.808 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:15.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:15.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:15.808 Initialization complete. Launching workers. 
00:16:15.808 ======================================================== 00:16:15.808 Latency(us) 00:16:15.808 Device Information : IOPS MiB/s Average min max 00:16:15.808 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10782.00 42.12 11877.03 1409.98 75261.43 00:16:15.808 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10713.00 41.85 11952.79 1632.78 78356.04 00:16:15.808 ======================================================== 00:16:15.808 Total : 21495.00 83.96 11914.79 1409.98 78356.04 00:16:15.808 00:16:15.808 15:27:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:15.808 15:27:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 63099b49-5f26-41a9-947e-675a31e28a1c 00:16:16.066 15:27:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f98a4c4d-e7b8-4652-96a0-a9b17ce11873 00:16:16.323 15:27:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:16.323 15:27:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:16.323 15:27:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:16.323 15:27:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:16.323 15:27:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:16.323 15:27:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:16.323 15:27:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:16.323 15:27:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:16.323 15:27:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:16.323 rmmod nvme_tcp 00:16:16.323 rmmod nvme_fabrics 00:16:16.323 rmmod nvme_keyring 00:16:16.323 15:27:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:16.323 15:27:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:16.323 15:27:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:16.323 15:27:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1080809 ']' 00:16:16.323 15:27:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1080809 00:16:16.323 15:27:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1080809 ']' 00:16:16.323 15:27:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1080809 00:16:16.323 15:27:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:16:16.323 15:27:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:16.323 15:27:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1080809 00:16:16.581 15:27:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:16.581 15:27:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:16.581 15:27:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1080809' 00:16:16.581 killing process with pid 1080809 00:16:16.581 15:27:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1080809 00:16:16.581 15:27:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1080809 00:16:16.841 15:27:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:16.841 
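Stripped of the xtrace framing, the nvmf_lvol case that just completed builds a logical-volume stack on RAID0 over two malloc bdevs, exports the lvol over NVMe/TCP, and performs snapshot, resize, clone and inflate operations while spdk_nvme_perf is writing to it. The rpc.py sequence is approximately as follows (UUIDs are the ones this run reported; $rpc_py is scripts/rpc.py from the checked-out tree):

    $rpc_py bdev_malloc_create 64 512                        # Malloc0
    $rpc_py bdev_malloc_create 64 512                        # Malloc1
    $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    $rpc_py bdev_lvol_create_lvstore raid0 lvs               # -> f98a4c4d-...
    $rpc_py bdev_lvol_create -u <lvs-uuid> lvol 20           # 20M lvol -> 63099b49-...
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # 10 s of 4k randwrite from the initiator side, then, while it runs:
    $rpc_py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
    $rpc_py bdev_lvol_resize  <lvol-uuid> 30
    $rpc_py bdev_lvol_clone   <snapshot-uuid> MY_CLONE
    $rpc_py bdev_lvol_inflate <clone-uuid>

Teardown then deletes the subsystem, the lvol and the lvstore in that order, which is what the rmmod/killprocess lines above are finishing off.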
15:27:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:16.841 15:27:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:16.841 15:27:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:16.841 15:27:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:16.842 15:27:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.842 15:27:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.842 15:27:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:18.751 00:16:18.751 real 0m18.617s 00:16:18.751 user 1m1.499s 00:16:18.751 sys 0m6.481s 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:18.751 ************************************ 00:16:18.751 END TEST nvmf_lvol 00:16:18.751 ************************************ 00:16:18.751 15:27:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:18.751 15:27:49 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:18.751 15:27:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:18.751 15:27:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:18.751 15:27:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:18.751 ************************************ 00:16:18.751 START TEST nvmf_lvs_grow 00:16:18.751 ************************************ 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:18.751 * Looking for test storage... 
00:16:18.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.751 15:27:49 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:18.752 15:27:49 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.752 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:18.752 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:18.752 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:18.752 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:18.752 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:18.752 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:18.752 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:18.752 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:18.752 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:18.752 15:27:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:18.752 15:27:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:18.752 15:27:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:18.752 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:18.752 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:18.752 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:18.752 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:18.752 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:18.752 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.752 15:27:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:18.752 15:27:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.010 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:19.010 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:19.010 15:27:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:19.010 15:27:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:20.911 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:20.911 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:20.911 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:20.911 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:20.911 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:20.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:20.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:16:20.912 00:16:20.912 --- 10.0.0.2 ping statistics --- 00:16:20.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.912 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:20.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:20.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:16:20.912 00:16:20.912 --- 10.0.0.1 ping statistics --- 00:16:20.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.912 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1084485 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1084485 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1084485 ']' 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.912 15:27:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:20.912 [2024-07-13 15:27:51.588836] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:16:20.912 [2024-07-13 15:27:51.588960] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.912 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.912 [2024-07-13 15:27:51.626710] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
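nvmfappstart here is the same helper used for the lvol test, now with core mask 0x1 since lvs_grow only needs a single target core. A rough sketch of what it does, with paths abbreviated and the waitforlisten step approximated by an rpc_get_methods poll (the real helper is the common/autotest_common.sh function visible in the trace):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # block until the target's RPC socket answers before any rpc.py calls are issued
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done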
00:16:20.912 [2024-07-13 15:27:51.658888] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.170 [2024-07-13 15:27:51.750051] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.170 [2024-07-13 15:27:51.750112] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:21.170 [2024-07-13 15:27:51.750135] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.170 [2024-07-13 15:27:51.750148] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.170 [2024-07-13 15:27:51.750161] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.170 [2024-07-13 15:27:51.750201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.170 15:27:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:21.170 15:27:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:16:21.170 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:21.170 15:27:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:21.170 15:27:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:21.170 15:27:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.170 15:27:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:21.428 [2024-07-13 15:27:52.108062] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.428 15:27:52 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:21.428 15:27:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:21.428 15:27:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:21.428 15:27:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:21.428 ************************************ 00:16:21.428 START TEST lvs_grow_clean 00:16:21.428 ************************************ 00:16:21.428 15:27:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:16:21.428 15:27:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:21.428 15:27:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:21.428 15:27:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:21.428 15:27:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:21.428 15:27:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:21.428 15:27:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:21.428 15:27:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:21.428 15:27:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:21.428 15:27:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:21.686 15:27:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:21.686 15:27:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:21.943 15:27:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=53543b24-e063-4cea-97ab-bc345899ca3c 00:16:21.943 15:27:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 53543b24-e063-4cea-97ab-bc345899ca3c 00:16:21.943 15:27:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:22.201 15:27:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:22.201 15:27:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:22.201 15:27:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 53543b24-e063-4cea-97ab-bc345899ca3c lvol 150 00:16:22.458 15:27:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8f4007ec-b170-4887-b4c5-0dab8519c0a7 00:16:22.458 15:27:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:22.458 15:27:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:22.715 [2024-07-13 15:27:53.408027] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:22.715 [2024-07-13 15:27:53.408117] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:22.715 true 00:16:22.715 15:27:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 53543b24-e063-4cea-97ab-bc345899ca3c 00:16:22.715 15:27:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:22.972 15:27:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:22.972 15:27:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:23.230 15:27:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8f4007ec-b170-4887-b4c5-0dab8519c0a7 00:16:23.512 15:27:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:23.774 [2024-07-13 15:27:54.407109] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:23.774 15:27:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:24.032 15:27:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1084866 00:16:24.032 15:27:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:24.032 15:27:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:24.032 15:27:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1084866 /var/tmp/bdevperf.sock 00:16:24.032 15:27:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1084866 ']' 00:16:24.032 15:27:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:24.032 15:27:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:24.032 15:27:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:24.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:24.032 15:27:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:24.032 15:27:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:24.032 [2024-07-13 15:27:54.718381] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:16:24.032 [2024-07-13 15:27:54.718455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1084866 ] 00:16:24.032 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.032 [2024-07-13 15:27:54.752470] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
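With the target up, lvs_grow_clean builds its lvstore on a file-backed AIO bdev rather than on malloc bdevs, because growing the store depends on the underlying bdev itself getting bigger. The preparation just traced, condensed ($testdir abbreviates the test/nvmf/target directory used in this run; the 49 and 99 cluster counts are what this run observed for a 200M and 400M file with 4 MiB clusters):

    truncate -s 200M $testdir/aio_bdev
    $rpc_py bdev_aio_create $testdir/aio_bdev aio_bdev 4096
    $rpc_py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
    $rpc_py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # 49
    $rpc_py bdev_lvol_create -u <lvs-uuid> lvol 150
    truncate -s 400M $testdir/aio_bdev        # grow the backing file ...
    $rpc_py bdev_aio_rescan aio_bdev          # ... and let the aio bdev pick up the new size
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The bdevperf process just launched attaches to that subsystem over TCP; while its 10-second randwrite run is in flight the test calls bdev_lvol_grow_lvstore -u <lvs-uuid> and expects total_data_clusters to move from 49 to 99 without disturbing I/O.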
00:16:24.032 [2024-07-13 15:27:54.783495] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.290 [2024-07-13 15:27:54.875620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.290 15:27:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.290 15:27:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:16:24.290 15:27:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:24.855 Nvme0n1 00:16:24.856 15:27:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:24.856 [ 00:16:24.856 { 00:16:24.856 "name": "Nvme0n1", 00:16:24.856 "aliases": [ 00:16:24.856 "8f4007ec-b170-4887-b4c5-0dab8519c0a7" 00:16:24.856 ], 00:16:24.856 "product_name": "NVMe disk", 00:16:24.856 "block_size": 4096, 00:16:24.856 "num_blocks": 38912, 00:16:24.856 "uuid": "8f4007ec-b170-4887-b4c5-0dab8519c0a7", 00:16:24.856 "assigned_rate_limits": { 00:16:24.856 "rw_ios_per_sec": 0, 00:16:24.856 "rw_mbytes_per_sec": 0, 00:16:24.856 "r_mbytes_per_sec": 0, 00:16:24.856 "w_mbytes_per_sec": 0 00:16:24.856 }, 00:16:24.856 "claimed": false, 00:16:24.856 "zoned": false, 00:16:24.856 "supported_io_types": { 00:16:24.856 "read": true, 00:16:24.856 "write": true, 00:16:24.856 "unmap": true, 00:16:24.856 "flush": true, 00:16:24.856 "reset": true, 00:16:24.856 "nvme_admin": true, 00:16:24.856 "nvme_io": true, 00:16:24.856 "nvme_io_md": false, 00:16:24.856 "write_zeroes": true, 00:16:24.856 "zcopy": false, 00:16:24.856 "get_zone_info": false, 00:16:24.856 "zone_management": false, 00:16:24.856 "zone_append": false, 00:16:24.856 "compare": true, 00:16:24.856 "compare_and_write": true, 00:16:24.856 "abort": true, 00:16:24.856 "seek_hole": false, 00:16:24.856 "seek_data": false, 00:16:24.856 "copy": true, 00:16:24.856 "nvme_iov_md": false 00:16:24.856 }, 00:16:24.856 "memory_domains": [ 00:16:24.856 { 00:16:24.856 "dma_device_id": "system", 00:16:24.856 "dma_device_type": 1 00:16:24.856 } 00:16:24.856 ], 00:16:24.856 "driver_specific": { 00:16:24.856 "nvme": [ 00:16:24.856 { 00:16:24.856 "trid": { 00:16:24.856 "trtype": "TCP", 00:16:24.856 "adrfam": "IPv4", 00:16:24.856 "traddr": "10.0.0.2", 00:16:24.856 "trsvcid": "4420", 00:16:24.856 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:24.856 }, 00:16:24.856 "ctrlr_data": { 00:16:24.856 "cntlid": 1, 00:16:24.856 "vendor_id": "0x8086", 00:16:24.856 "model_number": "SPDK bdev Controller", 00:16:24.856 "serial_number": "SPDK0", 00:16:24.856 "firmware_revision": "24.09", 00:16:24.856 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:24.856 "oacs": { 00:16:24.856 "security": 0, 00:16:24.856 "format": 0, 00:16:24.856 "firmware": 0, 00:16:24.856 "ns_manage": 0 00:16:24.856 }, 00:16:24.856 "multi_ctrlr": true, 00:16:24.856 "ana_reporting": false 00:16:24.856 }, 00:16:24.856 "vs": { 00:16:24.856 "nvme_version": "1.3" 00:16:24.856 }, 00:16:24.856 "ns_data": { 00:16:24.856 "id": 1, 00:16:24.856 "can_share": true 00:16:24.856 } 00:16:24.856 } 00:16:24.856 ], 00:16:24.856 "mp_policy": "active_passive" 00:16:24.856 } 00:16:24.856 } 00:16:24.856 ] 00:16:24.856 15:27:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1084938 00:16:24.856 15:27:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:24.856 15:27:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:25.114 Running I/O for 10 seconds... 00:16:26.047 Latency(us) 00:16:26.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:26.047 Nvme0n1 : 1.00 13139.00 51.32 0.00 0.00 0.00 0.00 0.00 00:16:26.047 =================================================================================================================== 00:16:26.047 Total : 13139.00 51.32 0.00 0.00 0.00 0.00 0.00 00:16:26.047 00:16:26.979 15:27:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 53543b24-e063-4cea-97ab-bc345899ca3c 00:16:26.979 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:26.980 Nvme0n1 : 2.00 13281.50 51.88 0.00 0.00 0.00 0.00 0.00 00:16:26.980 =================================================================================================================== 00:16:26.980 Total : 13281.50 51.88 0.00 0.00 0.00 0.00 0.00 00:16:26.980 00:16:27.260 true 00:16:27.260 15:27:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 53543b24-e063-4cea-97ab-bc345899ca3c 00:16:27.260 15:27:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:27.518 15:27:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:27.518 15:27:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:27.518 15:27:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1084938 00:16:28.084 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:28.084 Nvme0n1 : 3.00 13361.00 52.19 0.00 0.00 0.00 0.00 0.00 00:16:28.084 =================================================================================================================== 00:16:28.084 Total : 13361.00 52.19 0.00 0.00 0.00 0.00 0.00 00:16:28.084 00:16:29.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:29.018 Nvme0n1 : 4.00 13472.75 52.63 0.00 0.00 0.00 0.00 0.00 00:16:29.018 =================================================================================================================== 00:16:29.018 Total : 13472.75 52.63 0.00 0.00 0.00 0.00 0.00 00:16:29.018 00:16:29.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:29.952 Nvme0n1 : 5.00 13515.80 52.80 0.00 0.00 0.00 0.00 0.00 00:16:29.952 =================================================================================================================== 00:16:29.952 Total : 13515.80 52.80 0.00 0.00 0.00 0.00 0.00 00:16:29.952 00:16:31.327 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:31.327 Nvme0n1 : 6.00 13577.83 53.04 0.00 0.00 0.00 0.00 0.00 00:16:31.327 =================================================================================================================== 
00:16:31.327 Total : 13577.83 53.04 0.00 0.00 0.00 0.00 0.00 00:16:31.327 00:16:32.258 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:32.258 Nvme0n1 : 7.00 13605.00 53.14 0.00 0.00 0.00 0.00 0.00 00:16:32.258 =================================================================================================================== 00:16:32.258 Total : 13605.00 53.14 0.00 0.00 0.00 0.00 0.00 00:16:32.258 00:16:33.189 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:33.190 Nvme0n1 : 8.00 13630.38 53.24 0.00 0.00 0.00 0.00 0.00 00:16:33.190 =================================================================================================================== 00:16:33.190 Total : 13630.38 53.24 0.00 0.00 0.00 0.00 0.00 00:16:33.190 00:16:34.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:34.123 Nvme0n1 : 9.00 13643.89 53.30 0.00 0.00 0.00 0.00 0.00 00:16:34.123 =================================================================================================================== 00:16:34.123 Total : 13643.89 53.30 0.00 0.00 0.00 0.00 0.00 00:16:34.123 00:16:35.055 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:35.055 Nvme0n1 : 10.00 13659.50 53.36 0.00 0.00 0.00 0.00 0.00 00:16:35.055 =================================================================================================================== 00:16:35.055 Total : 13659.50 53.36 0.00 0.00 0.00 0.00 0.00 00:16:35.055 00:16:35.055 00:16:35.055 Latency(us) 00:16:35.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.055 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:35.055 Nvme0n1 : 10.01 13658.97 53.36 0.00 0.00 9362.71 6407.96 16019.91 00:16:35.055 =================================================================================================================== 00:16:35.055 Total : 13658.97 53.36 0.00 0.00 9362.71 6407.96 16019.91 00:16:35.055 0 00:16:35.055 15:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1084866 00:16:35.055 15:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1084866 ']' 00:16:35.055 15:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1084866 00:16:35.055 15:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:16:35.056 15:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:35.056 15:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1084866 00:16:35.056 15:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:35.056 15:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:35.056 15:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1084866' 00:16:35.056 killing process with pid 1084866 00:16:35.056 15:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1084866 00:16:35.056 Received shutdown signal, test time was about 10.000000 seconds 00:16:35.056 00:16:35.056 Latency(us) 00:16:35.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.056 
=================================================================================================================== 00:16:35.056 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:35.056 15:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1084866 00:16:35.313 15:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:35.570 15:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:35.828 15:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 53543b24-e063-4cea-97ab-bc345899ca3c 00:16:35.828 15:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:36.085 15:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:36.085 15:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:16:36.085 15:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:36.343 [2024-07-13 15:28:06.972669] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:36.343 15:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 53543b24-e063-4cea-97ab-bc345899ca3c 00:16:36.343 15:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:16:36.343 15:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 53543b24-e063-4cea-97ab-bc345899ca3c 00:16:36.343 15:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:36.343 15:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:36.343 15:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:36.343 15:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:36.343 15:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:36.343 15:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:36.343 15:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:36.343 15:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:36.343 15:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 53543b24-e063-4cea-97ab-bc345899ca3c 00:16:36.601 request: 00:16:36.601 { 00:16:36.601 "uuid": "53543b24-e063-4cea-97ab-bc345899ca3c", 00:16:36.601 "method": "bdev_lvol_get_lvstores", 00:16:36.601 "req_id": 1 00:16:36.601 } 00:16:36.601 Got JSON-RPC error response 00:16:36.601 response: 00:16:36.601 { 00:16:36.601 "code": -19, 00:16:36.601 "message": "No such device" 00:16:36.601 } 00:16:36.601 15:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:16:36.601 15:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:36.601 15:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:36.601 15:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:36.601 15:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:36.859 aio_bdev 00:16:36.859 15:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8f4007ec-b170-4887-b4c5-0dab8519c0a7 00:16:36.859 15:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=8f4007ec-b170-4887-b4c5-0dab8519c0a7 00:16:36.859 15:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:36.859 15:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:16:36.859 15:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:36.859 15:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:36.859 15:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:37.140 15:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8f4007ec-b170-4887-b4c5-0dab8519c0a7 -t 2000 00:16:37.398 [ 00:16:37.398 { 00:16:37.398 "name": "8f4007ec-b170-4887-b4c5-0dab8519c0a7", 00:16:37.398 "aliases": [ 00:16:37.398 "lvs/lvol" 00:16:37.398 ], 00:16:37.398 "product_name": "Logical Volume", 00:16:37.398 "block_size": 4096, 00:16:37.398 "num_blocks": 38912, 00:16:37.398 "uuid": "8f4007ec-b170-4887-b4c5-0dab8519c0a7", 00:16:37.398 "assigned_rate_limits": { 00:16:37.398 "rw_ios_per_sec": 0, 00:16:37.398 "rw_mbytes_per_sec": 0, 00:16:37.398 "r_mbytes_per_sec": 0, 00:16:37.398 "w_mbytes_per_sec": 0 00:16:37.398 }, 00:16:37.398 "claimed": false, 00:16:37.398 "zoned": false, 00:16:37.398 "supported_io_types": { 00:16:37.398 "read": true, 00:16:37.398 "write": true, 00:16:37.398 "unmap": true, 00:16:37.398 "flush": false, 00:16:37.398 "reset": true, 00:16:37.398 "nvme_admin": false, 00:16:37.398 "nvme_io": false, 00:16:37.398 "nvme_io_md": false, 00:16:37.398 "write_zeroes": true, 00:16:37.398 "zcopy": false, 00:16:37.398 "get_zone_info": false, 00:16:37.398 "zone_management": false, 00:16:37.398 "zone_append": false, 00:16:37.398 "compare": false, 00:16:37.398 "compare_and_write": false, 00:16:37.398 "abort": false, 00:16:37.398 "seek_hole": true, 00:16:37.398 
"seek_data": true, 00:16:37.398 "copy": false, 00:16:37.398 "nvme_iov_md": false 00:16:37.398 }, 00:16:37.398 "driver_specific": { 00:16:37.398 "lvol": { 00:16:37.398 "lvol_store_uuid": "53543b24-e063-4cea-97ab-bc345899ca3c", 00:16:37.398 "base_bdev": "aio_bdev", 00:16:37.398 "thin_provision": false, 00:16:37.398 "num_allocated_clusters": 38, 00:16:37.398 "snapshot": false, 00:16:37.398 "clone": false, 00:16:37.398 "esnap_clone": false 00:16:37.398 } 00:16:37.398 } 00:16:37.398 } 00:16:37.398 ] 00:16:37.398 15:28:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:16:37.398 15:28:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 53543b24-e063-4cea-97ab-bc345899ca3c 00:16:37.398 15:28:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:37.657 15:28:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:37.657 15:28:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 53543b24-e063-4cea-97ab-bc345899ca3c 00:16:37.657 15:28:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:37.955 15:28:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:37.955 15:28:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8f4007ec-b170-4887-b4c5-0dab8519c0a7 00:16:38.213 15:28:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 53543b24-e063-4cea-97ab-bc345899ca3c 00:16:38.469 15:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:38.727 15:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:38.727 00:16:38.727 real 0m17.221s 00:16:38.727 user 0m16.519s 00:16:38.727 sys 0m1.937s 00:16:38.727 15:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:38.727 15:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:38.727 ************************************ 00:16:38.727 END TEST lvs_grow_clean 00:16:38.727 ************************************ 00:16:38.727 15:28:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:16:38.727 15:28:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:38.727 15:28:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:38.727 15:28:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:38.727 15:28:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:38.727 ************************************ 00:16:38.727 START TEST lvs_grow_dirty 00:16:38.727 ************************************ 00:16:38.727 15:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 
00:16:38.727 15:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:38.727 15:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:38.727 15:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:38.727 15:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:38.727 15:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:38.727 15:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:38.727 15:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:38.727 15:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:38.727 15:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:38.983 15:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:38.983 15:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:39.549 15:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8482b9d1-2fe9-4d4f-a993-da6c1c945414 00:16:39.549 15:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8482b9d1-2fe9-4d4f-a993-da6c1c945414 00:16:39.549 15:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:39.549 15:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:39.549 15:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:39.549 15:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8482b9d1-2fe9-4d4f-a993-da6c1c945414 lvol 150 00:16:39.807 15:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ebda23c5-0697-4dfe-9e5f-356600cb342f 00:16:39.807 15:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:39.807 15:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:40.065 [2024-07-13 15:28:10.801322] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:40.065 [2024-07-13 15:28:10.801427] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev 
event: type 1 00:16:40.065 true 00:16:40.065 15:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8482b9d1-2fe9-4d4f-a993-da6c1c945414 00:16:40.065 15:28:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:40.324 15:28:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:40.324 15:28:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:40.582 15:28:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ebda23c5-0697-4dfe-9e5f-356600cb342f 00:16:40.841 15:28:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:41.099 [2024-07-13 15:28:11.804399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.099 15:28:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:41.357 15:28:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1086972 00:16:41.357 15:28:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:41.357 15:28:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:41.357 15:28:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1086972 /var/tmp/bdevperf.sock 00:16:41.357 15:28:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1086972 ']' 00:16:41.357 15:28:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:41.357 15:28:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:41.357 15:28:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:41.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:41.357 15:28:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:41.357 15:28:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:41.357 [2024-07-13 15:28:12.112522] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
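For reference, the dirty-variant preparation recorded in the preceding xtrace condenses to roughly the sketch below (paths shortened, UUIDs as printed in the log; illustrative only):

  truncate -s 200M .../test/nvmf/target/aio_bdev                     # 200M backing file
  rpc.py bdev_aio_create .../test/nvmf/target/aio_bdev aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  rpc.py bdev_lvol_create -u 8482b9d1-2fe9-4d4f-a993-da6c1c945414 lvol 150   # 150M lvol
  truncate -s 400M .../test/nvmf/target/aio_bdev                     # grow the file under the lvstore
  rpc.py bdev_aio_rescan aio_bdev                                    # picks up the new size (51200 -> 102400 blocks)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ebda23c5-0697-4dfe-9e5f-356600cb342f
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420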
00:16:41.357 [2024-07-13 15:28:12.112596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086972 ] 00:16:41.615 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.615 [2024-07-13 15:28:12.144686] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:41.615 [2024-07-13 15:28:12.172050] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.615 [2024-07-13 15:28:12.257861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.615 15:28:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:41.615 15:28:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:16:41.615 15:28:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:42.181 Nvme0n1 00:16:42.181 15:28:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:42.437 [ 00:16:42.437 { 00:16:42.437 "name": "Nvme0n1", 00:16:42.437 "aliases": [ 00:16:42.437 "ebda23c5-0697-4dfe-9e5f-356600cb342f" 00:16:42.437 ], 00:16:42.437 "product_name": "NVMe disk", 00:16:42.437 "block_size": 4096, 00:16:42.437 "num_blocks": 38912, 00:16:42.437 "uuid": "ebda23c5-0697-4dfe-9e5f-356600cb342f", 00:16:42.437 "assigned_rate_limits": { 00:16:42.437 "rw_ios_per_sec": 0, 00:16:42.437 "rw_mbytes_per_sec": 0, 00:16:42.437 "r_mbytes_per_sec": 0, 00:16:42.437 "w_mbytes_per_sec": 0 00:16:42.437 }, 00:16:42.437 "claimed": false, 00:16:42.437 "zoned": false, 00:16:42.437 "supported_io_types": { 00:16:42.437 "read": true, 00:16:42.437 "write": true, 00:16:42.437 "unmap": true, 00:16:42.437 "flush": true, 00:16:42.437 "reset": true, 00:16:42.437 "nvme_admin": true, 00:16:42.437 "nvme_io": true, 00:16:42.437 "nvme_io_md": false, 00:16:42.437 "write_zeroes": true, 00:16:42.437 "zcopy": false, 00:16:42.437 "get_zone_info": false, 00:16:42.437 "zone_management": false, 00:16:42.437 "zone_append": false, 00:16:42.437 "compare": true, 00:16:42.437 "compare_and_write": true, 00:16:42.437 "abort": true, 00:16:42.437 "seek_hole": false, 00:16:42.437 "seek_data": false, 00:16:42.437 "copy": true, 00:16:42.437 "nvme_iov_md": false 00:16:42.437 }, 00:16:42.437 "memory_domains": [ 00:16:42.437 { 00:16:42.437 "dma_device_id": "system", 00:16:42.437 "dma_device_type": 1 00:16:42.437 } 00:16:42.437 ], 00:16:42.437 "driver_specific": { 00:16:42.437 "nvme": [ 00:16:42.437 { 00:16:42.437 "trid": { 00:16:42.437 "trtype": "TCP", 00:16:42.437 "adrfam": "IPv4", 00:16:42.437 "traddr": "10.0.0.2", 00:16:42.437 "trsvcid": "4420", 00:16:42.437 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:42.437 }, 00:16:42.437 "ctrlr_data": { 00:16:42.437 "cntlid": 1, 00:16:42.437 "vendor_id": "0x8086", 00:16:42.437 "model_number": "SPDK bdev Controller", 00:16:42.437 "serial_number": "SPDK0", 00:16:42.438 "firmware_revision": "24.09", 00:16:42.438 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:42.438 "oacs": { 00:16:42.438 "security": 0, 
00:16:42.438 "format": 0, 00:16:42.438 "firmware": 0, 00:16:42.438 "ns_manage": 0 00:16:42.438 }, 00:16:42.438 "multi_ctrlr": true, 00:16:42.438 "ana_reporting": false 00:16:42.438 }, 00:16:42.438 "vs": { 00:16:42.438 "nvme_version": "1.3" 00:16:42.438 }, 00:16:42.438 "ns_data": { 00:16:42.438 "id": 1, 00:16:42.438 "can_share": true 00:16:42.438 } 00:16:42.438 } 00:16:42.438 ], 00:16:42.438 "mp_policy": "active_passive" 00:16:42.438 } 00:16:42.438 } 00:16:42.438 ] 00:16:42.438 15:28:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1087106 00:16:42.438 15:28:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:42.438 15:28:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:42.438 Running I/O for 10 seconds... 00:16:43.812 Latency(us) 00:16:43.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:43.812 Nvme0n1 : 1.00 13832.00 54.03 0.00 0.00 0.00 0.00 0.00 00:16:43.812 =================================================================================================================== 00:16:43.812 Total : 13832.00 54.03 0.00 0.00 0.00 0.00 0.00 00:16:43.812 00:16:44.379 15:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8482b9d1-2fe9-4d4f-a993-da6c1c945414 00:16:44.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:44.637 Nvme0n1 : 2.00 14019.50 54.76 0.00 0.00 0.00 0.00 0.00 00:16:44.637 =================================================================================================================== 00:16:44.637 Total : 14019.50 54.76 0.00 0.00 0.00 0.00 0.00 00:16:44.637 00:16:44.637 true 00:16:44.637 15:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8482b9d1-2fe9-4d4f-a993-da6c1c945414 00:16:44.637 15:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:44.895 15:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:44.895 15:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:44.895 15:28:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1087106 00:16:45.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:45.462 Nvme0n1 : 3.00 14195.00 55.45 0.00 0.00 0.00 0.00 0.00 00:16:45.462 =================================================================================================================== 00:16:45.462 Total : 14195.00 55.45 0.00 0.00 0.00 0.00 0.00 00:16:45.462 00:16:46.837 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:46.837 Nvme0n1 : 4.00 14257.75 55.69 0.00 0.00 0.00 0.00 0.00 00:16:46.837 =================================================================================================================== 00:16:46.837 Total : 14257.75 55.69 0.00 0.00 0.00 0.00 0.00 00:16:46.837 00:16:47.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:47.771 Nvme0n1 : 5.00 
14337.40 56.01 0.00 0.00 0.00 0.00 0.00 00:16:47.771 =================================================================================================================== 00:16:47.771 Total : 14337.40 56.01 0.00 0.00 0.00 0.00 0.00 00:16:47.771 00:16:48.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:48.707 Nvme0n1 : 6.00 14401.17 56.25 0.00 0.00 0.00 0.00 0.00 00:16:48.707 =================================================================================================================== 00:16:48.707 Total : 14401.17 56.25 0.00 0.00 0.00 0.00 0.00 00:16:48.707 00:16:49.642 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:49.642 Nvme0n1 : 7.00 14401.00 56.25 0.00 0.00 0.00 0.00 0.00 00:16:49.642 =================================================================================================================== 00:16:49.642 Total : 14401.00 56.25 0.00 0.00 0.00 0.00 0.00 00:16:49.642 00:16:50.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:50.576 Nvme0n1 : 8.00 14472.88 56.53 0.00 0.00 0.00 0.00 0.00 00:16:50.576 =================================================================================================================== 00:16:50.576 Total : 14472.88 56.53 0.00 0.00 0.00 0.00 0.00 00:16:50.576 00:16:51.510 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:51.510 Nvme0n1 : 9.00 14486.11 56.59 0.00 0.00 0.00 0.00 0.00 00:16:51.510 =================================================================================================================== 00:16:51.510 Total : 14486.11 56.59 0.00 0.00 0.00 0.00 0.00 00:16:51.510 00:16:52.457 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:52.457 Nvme0n1 : 10.00 14530.30 56.76 0.00 0.00 0.00 0.00 0.00 00:16:52.457 =================================================================================================================== 00:16:52.457 Total : 14530.30 56.76 0.00 0.00 0.00 0.00 0.00 00:16:52.457 00:16:52.457 00:16:52.457 Latency(us) 00:16:52.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.457 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:52.457 Nvme0n1 : 10.01 14534.69 56.78 0.00 0.00 8799.29 5121.52 19126.80 00:16:52.457 =================================================================================================================== 00:16:52.457 Total : 14534.69 56.78 0.00 0.00 8799.29 5121.52 19126.80 00:16:52.457 0 00:16:52.457 15:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1086972 00:16:52.457 15:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1086972 ']' 00:16:52.457 15:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1086972 00:16:52.457 15:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:16:52.717 15:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:52.717 15:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1086972 00:16:52.717 15:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:52.717 15:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:52.717 15:28:23 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1086972' 00:16:52.717 killing process with pid 1086972 00:16:52.717 15:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1086972 00:16:52.717 Received shutdown signal, test time was about 10.000000 seconds 00:16:52.717 00:16:52.717 Latency(us) 00:16:52.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.717 =================================================================================================================== 00:16:52.717 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:52.717 15:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1086972 00:16:52.717 15:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:52.975 15:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:53.233 15:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8482b9d1-2fe9-4d4f-a993-da6c1c945414 00:16:53.233 15:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:53.490 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:53.490 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:16:53.490 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1084485 00:16:53.490 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1084485 00:16:53.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1084485 Killed "${NVMF_APP[@]}" "$@" 00:16:53.748 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:16:53.748 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:16:53.748 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:53.748 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:53.748 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:53.748 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:53.748 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1088428 00:16:53.748 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1088428 00:16:53.748 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1088428 ']' 00:16:53.748 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.748 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:53.748 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.748 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:53.748 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:53.748 [2024-07-13 15:28:24.319835] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:16:53.748 [2024-07-13 15:28:24.319931] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.748 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.748 [2024-07-13 15:28:24.358379] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:53.748 [2024-07-13 15:28:24.388771] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.748 [2024-07-13 15:28:24.479225] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.748 [2024-07-13 15:28:24.479291] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:53.748 [2024-07-13 15:28:24.479307] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.748 [2024-07-13 15:28:24.479321] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.748 [2024-07-13 15:28:24.479333] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
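What makes this the dirty variant: per the xtrace above, the first nvmf target was killed with SIGKILL after the lvstore had been grown, so its metadata was never cleanly persisted, and a fresh target was started in its place. When the aio bdev is re-created just below, the blobstore recovery path replays the on-disk metadata (the "Performing recovery on blobstore" notices). Condensed from the recorded commands (paths shortened; illustrative only):

  kill -9 "$nvmfpid"                                            # leave the lvstore metadata dirty
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  waitforlisten "$nvmfpid"
  rpc.py bdev_aio_create .../test/nvmf/target/aio_bdev aio_bdev 4096   # triggers blobstore recovery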
00:16:53.748 [2024-07-13 15:28:24.479368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.006 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:54.006 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:16:54.006 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:54.006 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:54.006 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:54.006 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.006 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:54.264 [2024-07-13 15:28:24.899302] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:54.264 [2024-07-13 15:28:24.899448] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:54.264 [2024-07-13 15:28:24.899504] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:54.264 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:16:54.264 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ebda23c5-0697-4dfe-9e5f-356600cb342f 00:16:54.264 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=ebda23c5-0697-4dfe-9e5f-356600cb342f 00:16:54.264 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:54.264 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:16:54.264 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:54.264 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:54.264 15:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:54.521 15:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ebda23c5-0697-4dfe-9e5f-356600cb342f -t 2000 00:16:54.779 [ 00:16:54.779 { 00:16:54.779 "name": "ebda23c5-0697-4dfe-9e5f-356600cb342f", 00:16:54.779 "aliases": [ 00:16:54.779 "lvs/lvol" 00:16:54.779 ], 00:16:54.779 "product_name": "Logical Volume", 00:16:54.779 "block_size": 4096, 00:16:54.779 "num_blocks": 38912, 00:16:54.779 "uuid": "ebda23c5-0697-4dfe-9e5f-356600cb342f", 00:16:54.779 "assigned_rate_limits": { 00:16:54.779 "rw_ios_per_sec": 0, 00:16:54.779 "rw_mbytes_per_sec": 0, 00:16:54.779 "r_mbytes_per_sec": 0, 00:16:54.779 "w_mbytes_per_sec": 0 00:16:54.779 }, 00:16:54.779 "claimed": false, 00:16:54.779 "zoned": false, 00:16:54.779 "supported_io_types": { 00:16:54.779 "read": true, 00:16:54.779 "write": true, 00:16:54.779 "unmap": true, 00:16:54.779 "flush": false, 00:16:54.779 "reset": true, 00:16:54.779 "nvme_admin": false, 00:16:54.779 "nvme_io": false, 00:16:54.779 "nvme_io_md": 
false, 00:16:54.779 "write_zeroes": true, 00:16:54.779 "zcopy": false, 00:16:54.779 "get_zone_info": false, 00:16:54.779 "zone_management": false, 00:16:54.779 "zone_append": false, 00:16:54.779 "compare": false, 00:16:54.779 "compare_and_write": false, 00:16:54.779 "abort": false, 00:16:54.779 "seek_hole": true, 00:16:54.779 "seek_data": true, 00:16:54.779 "copy": false, 00:16:54.779 "nvme_iov_md": false 00:16:54.779 }, 00:16:54.779 "driver_specific": { 00:16:54.779 "lvol": { 00:16:54.779 "lvol_store_uuid": "8482b9d1-2fe9-4d4f-a993-da6c1c945414", 00:16:54.779 "base_bdev": "aio_bdev", 00:16:54.779 "thin_provision": false, 00:16:54.779 "num_allocated_clusters": 38, 00:16:54.779 "snapshot": false, 00:16:54.779 "clone": false, 00:16:54.779 "esnap_clone": false 00:16:54.779 } 00:16:54.779 } 00:16:54.779 } 00:16:54.779 ] 00:16:54.779 15:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:16:54.779 15:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8482b9d1-2fe9-4d4f-a993-da6c1c945414 00:16:54.779 15:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:16:55.036 15:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:16:55.036 15:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8482b9d1-2fe9-4d4f-a993-da6c1c945414 00:16:55.036 15:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:16:55.294 15:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:16:55.294 15:28:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:55.553 [2024-07-13 15:28:26.228496] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:55.553 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8482b9d1-2fe9-4d4f-a993-da6c1c945414 00:16:55.553 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:16:55.553 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8482b9d1-2fe9-4d4f-a993-da6c1c945414 00:16:55.553 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:55.553 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:55.553 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:55.553 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:55.553 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:16:55.553 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:55.553 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:55.553 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:55.553 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8482b9d1-2fe9-4d4f-a993-da6c1c945414 00:16:55.811 request: 00:16:55.811 { 00:16:55.811 "uuid": "8482b9d1-2fe9-4d4f-a993-da6c1c945414", 00:16:55.811 "method": "bdev_lvol_get_lvstores", 00:16:55.811 "req_id": 1 00:16:55.811 } 00:16:55.811 Got JSON-RPC error response 00:16:55.811 response: 00:16:55.811 { 00:16:55.811 "code": -19, 00:16:55.811 "message": "No such device" 00:16:55.811 } 00:16:55.811 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:16:55.811 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:55.811 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:55.811 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:55.811 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:56.070 aio_bdev 00:16:56.070 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ebda23c5-0697-4dfe-9e5f-356600cb342f 00:16:56.070 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=ebda23c5-0697-4dfe-9e5f-356600cb342f 00:16:56.070 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:56.070 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:16:56.070 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:56.070 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:56.070 15:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:56.327 15:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ebda23c5-0697-4dfe-9e5f-356600cb342f -t 2000 00:16:56.585 [ 00:16:56.585 { 00:16:56.585 "name": "ebda23c5-0697-4dfe-9e5f-356600cb342f", 00:16:56.585 "aliases": [ 00:16:56.585 "lvs/lvol" 00:16:56.585 ], 00:16:56.585 "product_name": "Logical Volume", 00:16:56.585 "block_size": 4096, 00:16:56.585 "num_blocks": 38912, 00:16:56.585 "uuid": "ebda23c5-0697-4dfe-9e5f-356600cb342f", 00:16:56.585 "assigned_rate_limits": { 00:16:56.585 "rw_ios_per_sec": 0, 00:16:56.585 "rw_mbytes_per_sec": 0, 00:16:56.585 "r_mbytes_per_sec": 0, 00:16:56.585 "w_mbytes_per_sec": 0 00:16:56.585 }, 00:16:56.585 "claimed": false, 00:16:56.585 "zoned": false, 00:16:56.585 "supported_io_types": { 
00:16:56.585 "read": true, 00:16:56.585 "write": true, 00:16:56.585 "unmap": true, 00:16:56.585 "flush": false, 00:16:56.585 "reset": true, 00:16:56.585 "nvme_admin": false, 00:16:56.585 "nvme_io": false, 00:16:56.585 "nvme_io_md": false, 00:16:56.585 "write_zeroes": true, 00:16:56.585 "zcopy": false, 00:16:56.585 "get_zone_info": false, 00:16:56.585 "zone_management": false, 00:16:56.585 "zone_append": false, 00:16:56.585 "compare": false, 00:16:56.585 "compare_and_write": false, 00:16:56.585 "abort": false, 00:16:56.585 "seek_hole": true, 00:16:56.585 "seek_data": true, 00:16:56.585 "copy": false, 00:16:56.585 "nvme_iov_md": false 00:16:56.585 }, 00:16:56.585 "driver_specific": { 00:16:56.585 "lvol": { 00:16:56.585 "lvol_store_uuid": "8482b9d1-2fe9-4d4f-a993-da6c1c945414", 00:16:56.585 "base_bdev": "aio_bdev", 00:16:56.585 "thin_provision": false, 00:16:56.585 "num_allocated_clusters": 38, 00:16:56.585 "snapshot": false, 00:16:56.585 "clone": false, 00:16:56.585 "esnap_clone": false 00:16:56.585 } 00:16:56.585 } 00:16:56.585 } 00:16:56.585 ] 00:16:56.585 15:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:16:56.585 15:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8482b9d1-2fe9-4d4f-a993-da6c1c945414 00:16:56.585 15:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:56.842 15:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:56.842 15:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8482b9d1-2fe9-4d4f-a993-da6c1c945414 00:16:56.842 15:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:57.099 15:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:57.100 15:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ebda23c5-0697-4dfe-9e5f-356600cb342f 00:16:57.357 15:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8482b9d1-2fe9-4d4f-a993-da6c1c945414 00:16:57.614 15:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:57.872 15:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:57.872 00:16:57.872 real 0m19.187s 00:16:57.872 user 0m47.930s 00:16:57.872 sys 0m4.937s 00:16:57.872 15:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:57.872 15:28:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:57.872 ************************************ 00:16:57.872 END TEST lvs_grow_dirty 00:16:57.872 ************************************ 00:16:57.872 15:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:16:57.872 15:28:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
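The dirty variant then checks the same post-recovery invariants as the clean one and tears everything down, as the xtrace just above records; condensed (UUIDs as printed in the log, paths shortened; illustrative only):

  free=$(rpc.py bdev_lvol_get_lvstores -u 8482b9d1-2fe9-4d4f-a993-da6c1c945414 | jq -r '.[0].free_clusters')          # expected 61
  total=$(rpc.py bdev_lvol_get_lvstores -u 8482b9d1-2fe9-4d4f-a993-da6c1c945414 | jq -r '.[0].total_data_clusters')   # expected 99 after the grow
  rpc.py bdev_lvol_delete ebda23c5-0697-4dfe-9e5f-356600cb342f
  rpc.py bdev_lvol_delete_lvstore -u 8482b9d1-2fe9-4d4f-a993-da6c1c945414
  rpc.py bdev_aio_delete aio_bdev
  rm -f .../test/nvmf/target/aio_bdev        # remove the backing file created for the test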
00:16:57.872 15:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:16:57.872 15:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:16:57.872 15:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:58.154 15:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:58.154 15:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:58.155 nvmf_trace.0 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:58.155 rmmod nvme_tcp 00:16:58.155 rmmod nvme_fabrics 00:16:58.155 rmmod nvme_keyring 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1088428 ']' 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1088428 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1088428 ']' 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1088428 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1088428 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1088428' 00:16:58.155 killing process with pid 1088428 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1088428 00:16:58.155 15:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1088428 00:16:58.414 15:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:58.414 15:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:58.414 15:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:58.414 
15:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:58.414 15:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:58.414 15:28:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.414 15:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.414 15:28:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.316 15:28:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:00.316 00:17:00.316 real 0m41.575s 00:17:00.316 user 1m10.147s 00:17:00.316 sys 0m8.673s 00:17:00.316 15:28:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:00.316 15:28:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:00.316 ************************************ 00:17:00.316 END TEST nvmf_lvs_grow 00:17:00.316 ************************************ 00:17:00.316 15:28:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:00.316 15:28:31 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:00.316 15:28:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:00.316 15:28:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:00.316 15:28:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:00.316 ************************************ 00:17:00.316 START TEST nvmf_bdev_io_wait 00:17:00.316 ************************************ 00:17:00.316 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:00.575 * Looking for test storage... 
00:17:00.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:00.575 15:28:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:02.475 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:02.475 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:02.475 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:02.475 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:02.475 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:02.475 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:02.475 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:02.475 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:02.475 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:02.475 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:02.475 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:02.475 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:02.475 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:02.475 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:02.475 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:02.476 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:02.476 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:02.476 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:02.476 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:02.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:02.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:17:02.476 00:17:02.476 --- 10.0.0.2 ping statistics --- 00:17:02.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.476 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:02.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:02.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:17:02.476 00:17:02.476 --- 10.0.0.1 ping statistics --- 00:17:02.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.476 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1090835 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1090835 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1090835 ']' 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:02.476 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:02.734 [2024-07-13 15:28:33.276619] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
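nvmf_tcp_init and nvmfappstart, traced above, move the first E810 port (cvl_0_0) into a private network namespace, give the two ports the 10.0.0.0/24 test addresses, verify reachability, and then start nvmf_tgt inside that namespace. Condensed into plain commands, this is a sketch assuming the same interface names; the script itself uses the absolute nvmf_tgt path in the Jenkins workspace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator-side port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &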
00:17:02.734 [2024-07-13 15:28:33.276694] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.734 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.734 [2024-07-13 15:28:33.316692] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:02.734 [2024-07-13 15:28:33.345699] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:02.734 [2024-07-13 15:28:33.435716] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.734 [2024-07-13 15:28:33.435770] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.734 [2024-07-13 15:28:33.435799] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.734 [2024-07-13 15:28:33.435810] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.734 [2024-07-13 15:28:33.435820] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.734 [2024-07-13 15:28:33.435901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.734 [2024-07-13 15:28:33.435963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.734 [2024-07-13 15:28:33.436026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:02.734 [2024-07-13 15:28:33.436028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.734 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:02.734 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:17:02.734 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:02.734 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:02.734 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:02.992 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:02.993 [2024-07-13 
15:28:33.594979] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:02.993 Malloc0 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:02.993 [2024-07-13 15:28:33.663470] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1090990 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1090992 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:02.993 { 00:17:02.993 "params": { 00:17:02.993 "name": "Nvme$subsystem", 00:17:02.993 "trtype": "$TEST_TRANSPORT", 00:17:02.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:02.993 "adrfam": "ipv4", 00:17:02.993 "trsvcid": "$NVMF_PORT", 00:17:02.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:02.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:02.993 "hdgst": ${hdgst:-false}, 00:17:02.993 "ddgst": ${ddgst:-false} 00:17:02.993 }, 00:17:02.993 "method": 
"bdev_nvme_attach_controller" 00:17:02.993 } 00:17:02.993 EOF 00:17:02.993 )") 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1090994 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:02.993 { 00:17:02.993 "params": { 00:17:02.993 "name": "Nvme$subsystem", 00:17:02.993 "trtype": "$TEST_TRANSPORT", 00:17:02.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:02.993 "adrfam": "ipv4", 00:17:02.993 "trsvcid": "$NVMF_PORT", 00:17:02.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:02.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:02.993 "hdgst": ${hdgst:-false}, 00:17:02.993 "ddgst": ${ddgst:-false} 00:17:02.993 }, 00:17:02.993 "method": "bdev_nvme_attach_controller" 00:17:02.993 } 00:17:02.993 EOF 00:17:02.993 )") 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1090997 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:02.993 { 00:17:02.993 "params": { 00:17:02.993 "name": "Nvme$subsystem", 00:17:02.993 "trtype": "$TEST_TRANSPORT", 00:17:02.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:02.993 "adrfam": "ipv4", 00:17:02.993 "trsvcid": "$NVMF_PORT", 00:17:02.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:02.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:02.993 "hdgst": ${hdgst:-false}, 00:17:02.993 "ddgst": ${ddgst:-false} 00:17:02.993 }, 00:17:02.993 "method": "bdev_nvme_attach_controller" 00:17:02.993 } 00:17:02.993 EOF 00:17:02.993 )") 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@532 -- # local subsystem config 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:02.993 { 00:17:02.993 "params": { 00:17:02.993 "name": "Nvme$subsystem", 00:17:02.993 "trtype": "$TEST_TRANSPORT", 00:17:02.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:02.993 "adrfam": "ipv4", 00:17:02.993 "trsvcid": "$NVMF_PORT", 00:17:02.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:02.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:02.993 "hdgst": ${hdgst:-false}, 00:17:02.993 "ddgst": ${ddgst:-false} 00:17:02.993 }, 00:17:02.993 "method": "bdev_nvme_attach_controller" 00:17:02.993 } 00:17:02.993 EOF 00:17:02.993 )") 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1090990 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:02.993 "params": { 00:17:02.993 "name": "Nvme1", 00:17:02.993 "trtype": "tcp", 00:17:02.993 "traddr": "10.0.0.2", 00:17:02.993 "adrfam": "ipv4", 00:17:02.993 "trsvcid": "4420", 00:17:02.993 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.993 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:02.993 "hdgst": false, 00:17:02.993 "ddgst": false 00:17:02.993 }, 00:17:02.993 "method": "bdev_nvme_attach_controller" 00:17:02.993 }' 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:02.993 "params": { 00:17:02.993 "name": "Nvme1", 00:17:02.993 "trtype": "tcp", 00:17:02.993 "traddr": "10.0.0.2", 00:17:02.993 "adrfam": "ipv4", 00:17:02.993 "trsvcid": "4420", 00:17:02.993 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.993 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:02.993 "hdgst": false, 00:17:02.993 "ddgst": false 00:17:02.993 }, 00:17:02.993 "method": "bdev_nvme_attach_controller" 00:17:02.993 }' 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
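Before the bdevperf jobs run, the rpc_cmd calls traced above provision the target over /var/tmp/spdk.sock. rpc_cmd is effectively a wrapper around scripts/rpc.py, so the same sequence driven by hand would look roughly like this sketch rather than the literal commands the script issues:

scripts/rpc.py bdev_set_options -p 5 -c 1                # deliberately tiny bdev_io pool/cache
scripts/rpc.py framework_start_init                      # finish init deferred by --wait-for-rpc
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB backing bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420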
00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:02.993 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:02.993 "params": { 00:17:02.993 "name": "Nvme1", 00:17:02.993 "trtype": "tcp", 00:17:02.993 "traddr": "10.0.0.2", 00:17:02.993 "adrfam": "ipv4", 00:17:02.993 "trsvcid": "4420", 00:17:02.993 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.994 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:02.994 "hdgst": false, 00:17:02.994 "ddgst": false 00:17:02.994 }, 00:17:02.994 "method": "bdev_nvme_attach_controller" 00:17:02.994 }' 00:17:02.994 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:02.994 15:28:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:02.994 "params": { 00:17:02.994 "name": "Nvme1", 00:17:02.994 "trtype": "tcp", 00:17:02.994 "traddr": "10.0.0.2", 00:17:02.994 "adrfam": "ipv4", 00:17:02.994 "trsvcid": "4420", 00:17:02.994 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.994 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:02.994 "hdgst": false, 00:17:02.994 "ddgst": false 00:17:02.994 }, 00:17:02.994 "method": "bdev_nvme_attach_controller" 00:17:02.994 }' 00:17:02.994 [2024-07-13 15:28:33.710220] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:17:02.994 [2024-07-13 15:28:33.710220] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:17:02.994 [2024-07-13 15:28:33.710220] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:17:02.994 [2024-07-13 15:28:33.710328] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-13 15:28:33.710327] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-13 15:28:33.710329] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:02.994 --proc-type=auto ] 00:17:02.994 --proc-type=auto ] 00:17:02.994 [2024-07-13 15:28:33.711756] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:17:02.994 [2024-07-13 15:28:33.711825] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:03.251 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.251 [2024-07-13 15:28:33.855663] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:03.251 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.251 [2024-07-13 15:28:33.883801] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.251 [2024-07-13 15:28:33.952701] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
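Each of the four bdevperf instances launched above gets its workload on the command line (-w write/read/flush/unmap on cores 0x10/0x20/0x40/0x80, with -q 128, -o 4096, -t 1, -s 256) plus a generated JSON config streamed over /dev/fd/63 whose job is to attach Nvme1 to the subsystem created earlier. Spelled out for one instance, with the config in a temporary file and trimmed to the attach entry only (the generated file also carries a couple of housekeeping entries), this is an illustrative variation rather than the script's literal plumbing:

cat > /tmp/nvme1.json <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false },
    "method": "bdev_nvme_attach_controller" } ] } ] }
JSON
./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256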
00:17:03.251 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.251 [2024-07-13 15:28:33.958504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:03.251 [2024-07-13 15:28:33.982809] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.509 [2024-07-13 15:28:34.051237] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:03.509 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.509 [2024-07-13 15:28:34.057984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:03.509 [2024-07-13 15:28:34.081515] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.509 [2024-07-13 15:28:34.127075] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:03.509 [2024-07-13 15:28:34.156817] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.509 [2024-07-13 15:28:34.160744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:03.509 [2024-07-13 15:28:34.227514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:17:03.766 Running I/O for 1 seconds... 00:17:03.766 Running I/O for 1 seconds... 00:17:03.766 Running I/O for 1 seconds... 00:17:03.766 Running I/O for 1 seconds... 00:17:04.697 00:17:04.697 Latency(us) 00:17:04.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.698 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:04.698 Nvme1n1 : 1.02 6376.55 24.91 0.00 0.00 19868.23 9272.13 28156.21 00:17:04.698 =================================================================================================================== 00:17:04.698 Total : 6376.55 24.91 0.00 0.00 19868.23 9272.13 28156.21 00:17:04.698 00:17:04.698 Latency(us) 00:17:04.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.698 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:04.698 Nvme1n1 : 1.01 8511.44 33.25 0.00 0.00 14967.24 8543.95 26796.94 00:17:04.698 =================================================================================================================== 00:17:04.698 Total : 8511.44 33.25 0.00 0.00 14967.24 8543.95 26796.94 00:17:04.980 00:17:04.980 Latency(us) 00:17:04.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.980 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:04.980 Nvme1n1 : 1.01 6510.60 25.43 0.00 0.00 19598.08 5364.24 42525.58 00:17:04.980 =================================================================================================================== 00:17:04.980 Total : 6510.60 25.43 0.00 0.00 19598.08 5364.24 42525.58 00:17:04.980 00:17:04.980 Latency(us) 00:17:04.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.980 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:04.980 Nvme1n1 : 1.00 199034.93 777.48 0.00 0.00 640.37 279.13 849.54 00:17:04.980 =================================================================================================================== 00:17:04.980 Total : 199034.93 777.48 0.00 0.00 640.37 279.13 849.54 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1090992 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1090994 00:17:05.238 
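In the tables above, MiB/s is simply IOPS times the 4096-byte IO size: the read job's 6376.55 IOPS x 4096 B comes to about 24.91 MiB/s, matching the reported value (the flush job's ~199k IOPS is so high because flush is essentially immediate on a malloc bdev). A quick check of that arithmetic:

awk 'BEGIN { printf "%.2f MiB/s\n", 6376.55 * 4096 / (1024 * 1024) }'
# prints 24.91 MiB/s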
15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1090997 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:05.238 rmmod nvme_tcp 00:17:05.238 rmmod nvme_fabrics 00:17:05.238 rmmod nvme_keyring 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1090835 ']' 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1090835 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1090835 ']' 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1090835 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1090835 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:05.238 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1090835' 00:17:05.239 killing process with pid 1090835 00:17:05.239 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1090835 00:17:05.239 15:28:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1090835 00:17:05.498 15:28:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:05.498 15:28:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:05.498 15:28:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:05.498 15:28:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:05.498 15:28:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:05.498 15:28:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:17:05.498 15:28:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.498 15:28:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.454 15:28:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:07.454 00:17:07.454 real 0m7.138s 00:17:07.454 user 0m16.145s 00:17:07.454 sys 0m3.625s 00:17:07.455 15:28:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:07.455 15:28:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:07.455 ************************************ 00:17:07.455 END TEST nvmf_bdev_io_wait 00:17:07.455 ************************************ 00:17:07.713 15:28:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:07.713 15:28:38 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:07.713 15:28:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:07.713 15:28:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:07.713 15:28:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:07.713 ************************************ 00:17:07.713 START TEST nvmf_queue_depth 00:17:07.713 ************************************ 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:07.713 * Looking for test storage... 00:17:07.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 
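run_test, which frames both tests here, prints the START/END banners, times the wrapped script (the real/user/sys lines above), and propagates its exit status. A simplified sketch of that wrapper; the real helper in autotest_common.sh also handles xtrace toggling and argument checks, omitted here:

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}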
00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.713 15:28:38 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:07.714 15:28:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:09.611 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.611 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:09.612 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:09.612 15:28:40 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:09.612 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:09.612 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:09.612 15:28:40 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:09.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:09.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:17:09.612 00:17:09.612 --- 10.0.0.2 ping statistics --- 00:17:09.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.612 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:09.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:09.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:17:09.612 00:17:09.612 --- 10.0.0.1 ping statistics --- 00:17:09.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.612 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:09.612 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:09.870 15:28:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:09.870 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:09.870 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:09.870 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:09.870 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1093207 00:17:09.870 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:09.870 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 
-- # waitforlisten 1093207 00:17:09.870 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1093207 ']' 00:17:09.870 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.870 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:09.870 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.870 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:09.871 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:09.871 [2024-07-13 15:28:40.444901] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:17:09.871 [2024-07-13 15:28:40.445008] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.871 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.871 [2024-07-13 15:28:40.484529] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:09.871 [2024-07-13 15:28:40.511963] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.871 [2024-07-13 15:28:40.598082] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.871 [2024-07-13 15:28:40.598160] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.871 [2024-07-13 15:28:40.598174] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.871 [2024-07-13 15:28:40.598185] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.871 [2024-07-13 15:28:40.598195] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
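Condensed, the nvmftestinit / nvmfappstart phase traced above amounts to the following manual sequence. This is only a sketch assembled from commands visible in the trace; the interface names cvl_0_0 / cvl_0_1, the 10.0.0.x addresses, and the nvmf_tgt binary are specific to this run (the absolute Jenkins workspace path is shortened to a relative one here):

  # move the target-side port into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
  modprobe nvme-tcp
  # launch the target inside the namespace with the same core mask / trace flags
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &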
00:17:09.871 [2024-07-13 15:28:40.598243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.129 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:10.129 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:10.129 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:10.129 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:10.129 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:10.129 15:28:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.129 15:28:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:10.129 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.129 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:10.129 [2024-07-13 15:28:40.744889] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.129 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.129 15:28:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:10.129 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.129 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:10.129 Malloc0 00:17:10.129 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.129 15:28:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:10.129 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.129 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:10.129 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.129 15:28:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:10.129 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.130 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:10.130 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.130 15:28:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.130 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.130 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:10.130 [2024-07-13 15:28:40.804502] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.130 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.130 15:28:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1093231 00:17:10.130 15:28:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:10.130 15:28:40 
nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:10.130 15:28:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1093231 /var/tmp/bdevperf.sock 00:17:10.130 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1093231 ']' 00:17:10.130 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:10.130 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:10.130 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:10.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:10.130 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:10.130 15:28:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:10.130 [2024-07-13 15:28:40.849403] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:17:10.130 [2024-07-13 15:28:40.849478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1093231 ] 00:17:10.130 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.130 [2024-07-13 15:28:40.882130] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:10.388 [2024-07-13 15:28:40.912912] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.388 [2024-07-13 15:28:41.003996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.388 15:28:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:10.388 15:28:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:10.388 15:28:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:10.388 15:28:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.388 15:28:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:10.646 NVMe0n1 00:17:10.646 15:28:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.646 15:28:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:10.903 Running I/O for 10 seconds... 
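Before the 10-second run starts, queue_depth.sh has issued the RPC calls visible above. A rough manual equivalent (a sketch only; rpc_cmd in the trace wraps scripts/rpc.py, and paths are given relative to the SPDK tree rather than the Jenkins workspace):

  # TCP transport plus a 64 MiB, 512 B-block malloc namespace behind cnode1
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # drive it with bdevperf: queue depth 1024, 4 KiB verify I/O, 10 seconds
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests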
00:17:20.880 00:17:20.880 Latency(us) 00:17:20.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.880 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:20.880 Verification LBA range: start 0x0 length 0x4000 00:17:20.880 NVMe0n1 : 10.06 8550.74 33.40 0.00 0.00 119280.61 12815.93 75342.13 00:17:20.880 =================================================================================================================== 00:17:20.880 Total : 8550.74 33.40 0.00 0.00 119280.61 12815.93 75342.13 00:17:20.880 0 00:17:20.880 15:28:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1093231 00:17:20.880 15:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1093231 ']' 00:17:20.880 15:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1093231 00:17:20.880 15:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:17:20.880 15:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:20.880 15:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1093231 00:17:20.880 15:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:20.880 15:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:20.880 15:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1093231' 00:17:20.880 killing process with pid 1093231 00:17:20.880 15:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1093231 00:17:20.880 Received shutdown signal, test time was about 10.000000 seconds 00:17:20.880 00:17:20.880 Latency(us) 00:17:20.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.880 =================================================================================================================== 00:17:20.880 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:20.880 15:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1093231 00:17:21.137 15:28:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:21.137 15:28:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:21.137 15:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:21.138 15:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:21.138 15:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:21.138 15:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:21.138 15:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:21.138 15:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:21.138 rmmod nvme_tcp 00:17:21.138 rmmod nvme_fabrics 00:17:21.138 rmmod nvme_keyring 00:17:21.138 15:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:21.138 15:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:21.138 15:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:21.138 15:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1093207 ']' 00:17:21.138 15:28:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1093207 00:17:21.138 15:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 
1093207 ']' 00:17:21.138 15:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1093207 00:17:21.138 15:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:17:21.138 15:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:21.138 15:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1093207 00:17:21.138 15:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:21.138 15:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:21.138 15:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1093207' 00:17:21.138 killing process with pid 1093207 00:17:21.138 15:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1093207 00:17:21.138 15:28:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1093207 00:17:21.704 15:28:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:21.704 15:28:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:21.704 15:28:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:21.704 15:28:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:21.704 15:28:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:21.704 15:28:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.704 15:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.704 15:28:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.626 15:28:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:23.626 00:17:23.626 real 0m15.947s 00:17:23.626 user 0m22.606s 00:17:23.626 sys 0m2.976s 00:17:23.626 15:28:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:23.626 15:28:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:23.626 ************************************ 00:17:23.626 END TEST nvmf_queue_depth 00:17:23.626 ************************************ 00:17:23.626 15:28:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:23.627 15:28:54 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:23.627 15:28:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:23.627 15:28:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:23.627 15:28:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:23.627 ************************************ 00:17:23.627 START TEST nvmf_target_multipath 00:17:23.627 ************************************ 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:23.627 * Looking for test storage... 
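The nvmftestfini teardown that closed the queue_depth test above (and that each later test in this log repeats) is roughly the following; this is a sketch built from the traced commands, and the ip netns delete step is an assumption, since _remove_spdk_ns is eval'd with xtrace disabled so its exact cleanup command is not recorded here:

  kill "$nvmfpid"                   # stop the nvmf_tgt started for the test
  modprobe -v -r nvme-tcp           # also removes nvme_fabrics / nvme_keyring, as shown above
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1          # drop the initiator-side 10.0.0.1/24 address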
00:17:23.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.627 
15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:17:23.627 15:28:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:26.159 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:26.159 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.159 
15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:26.159 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.159 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:26.159 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:26.160 
15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:26.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:26.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:17:26.160 00:17:26.160 --- 10.0.0.2 ping statistics --- 00:17:26.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.160 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:26.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:26.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:17:26.160 00:17:26.160 --- 10.0.0.1 ping statistics --- 00:17:26.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.160 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:26.160 only one NIC for nvmf test 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:26.160 rmmod nvme_tcp 00:17:26.160 rmmod nvme_fabrics 00:17:26.160 rmmod nvme_keyring 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:26.160 15:28:56 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.160 15:28:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:28.063 00:17:28.063 real 0m4.350s 00:17:28.063 user 0m0.819s 00:17:28.063 sys 0m1.532s 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:28.063 15:28:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:28.063 
************************************ 00:17:28.063 END TEST nvmf_target_multipath 00:17:28.063 ************************************ 00:17:28.063 15:28:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:28.063 15:28:58 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:28.063 15:28:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:28.063 15:28:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:28.063 15:28:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:28.063 ************************************ 00:17:28.063 START TEST nvmf_zcopy 00:17:28.063 ************************************ 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:28.063 * Looking for test storage... 00:17:28.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 
00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:28.063 15:28:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:30.628 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 
00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:30.629 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:30.629 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:30.629 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:30.629 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:30.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:30.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:17:30.629 00:17:30.629 --- 10.0.0.2 ping statistics --- 00:17:30.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.629 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:30.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:30.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:17:30.629 00:17:30.629 --- 10.0.0.1 ping statistics --- 00:17:30.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.629 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1098397 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1098397 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1098397 ']' 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:30.629 15:29:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:30.629 [2024-07-13 15:29:00.996192] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:17:30.629 [2024-07-13 15:29:00.996268] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.629 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.629 [2024-07-13 15:29:01.032790] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:30.629 [2024-07-13 15:29:01.059547] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.629 [2024-07-13 15:29:01.142613] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:30.629 [2024-07-13 15:29:01.142666] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.629 [2024-07-13 15:29:01.142695] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.629 [2024-07-13 15:29:01.142706] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.629 [2024-07-13 15:29:01.142715] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.629 [2024-07-13 15:29:01.142746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.629 15:29:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:30.629 15:29:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:17:30.629 15:29:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:30.629 15:29:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:30.629 15:29:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:30.629 15:29:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.629 15:29:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:30.629 15:29:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:30.629 15:29:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.629 15:29:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:30.629 [2024-07-13 15:29:01.278779] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.629 15:29:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.629 15:29:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:30.629 15:29:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.629 15:29:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:30.629 15:29:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:30.630 [2024-07-13 15:29:01.294978] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:17:30.630 malloc0 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:30.630 { 00:17:30.630 "params": { 00:17:30.630 "name": "Nvme$subsystem", 00:17:30.630 "trtype": "$TEST_TRANSPORT", 00:17:30.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:30.630 "adrfam": "ipv4", 00:17:30.630 "trsvcid": "$NVMF_PORT", 00:17:30.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:30.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:30.630 "hdgst": ${hdgst:-false}, 00:17:30.630 "ddgst": ${ddgst:-false} 00:17:30.630 }, 00:17:30.630 "method": "bdev_nvme_attach_controller" 00:17:30.630 } 00:17:30.630 EOF 00:17:30.630 )") 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:30.630 15:29:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:30.630 "params": { 00:17:30.630 "name": "Nvme1", 00:17:30.630 "trtype": "tcp", 00:17:30.630 "traddr": "10.0.0.2", 00:17:30.630 "adrfam": "ipv4", 00:17:30.630 "trsvcid": "4420", 00:17:30.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:30.630 "hdgst": false, 00:17:30.630 "ddgst": false 00:17:30.630 }, 00:17:30.630 "method": "bdev_nvme_attach_controller" 00:17:30.630 }' 00:17:30.630 [2024-07-13 15:29:01.372022] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:17:30.630 [2024-07-13 15:29:01.372101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1098422 ] 00:17:30.888 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.888 [2024-07-13 15:29:01.404262] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:30.888 [2024-07-13 15:29:01.432383] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.888 [2024-07-13 15:29:01.522238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.146 Running I/O for 10 seconds... 
00:17:41.114 00:17:41.114 Latency(us) 00:17:41.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.114 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:17:41.114 Verification LBA range: start 0x0 length 0x1000 00:17:41.114 Nvme1n1 : 10.01 5851.58 45.72 0.00 0.00 21813.67 749.42 30874.74 00:17:41.114 =================================================================================================================== 00:17:41.114 Total : 5851.58 45.72 0.00 0.00 21813.67 749.42 30874.74 00:17:41.374 15:29:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1099613 00:17:41.374 15:29:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:17:41.374 15:29:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:41.374 15:29:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:17:41.374 15:29:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:17:41.374 15:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:41.374 15:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:41.374 15:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:41.374 15:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:41.374 { 00:17:41.374 "params": { 00:17:41.374 "name": "Nvme$subsystem", 00:17:41.374 "trtype": "$TEST_TRANSPORT", 00:17:41.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:41.374 "adrfam": "ipv4", 00:17:41.374 "trsvcid": "$NVMF_PORT", 00:17:41.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:41.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:41.374 "hdgst": ${hdgst:-false}, 00:17:41.374 "ddgst": ${ddgst:-false} 00:17:41.374 }, 00:17:41.374 "method": "bdev_nvme_attach_controller" 00:17:41.374 } 00:17:41.374 EOF 00:17:41.374 )") 00:17:41.374 [2024-07-13 15:29:11.973530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.374 [2024-07-13 15:29:11.973581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.374 15:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:41.374 15:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:17:41.374 15:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:41.374 15:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:41.374 "params": { 00:17:41.374 "name": "Nvme1", 00:17:41.374 "trtype": "tcp", 00:17:41.374 "traddr": "10.0.0.2", 00:17:41.374 "adrfam": "ipv4", 00:17:41.374 "trsvcid": "4420", 00:17:41.374 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.374 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:41.374 "hdgst": false, 00:17:41.374 "ddgst": false 00:17:41.374 }, 00:17:41.374 "method": "bdev_nvme_attach_controller" 00:17:41.374 }' 00:17:41.374 [2024-07-13 15:29:11.981477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.374 [2024-07-13 15:29:11.981505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.374 [2024-07-13 15:29:11.989519] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.374 [2024-07-13 15:29:11.989545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.374 [2024-07-13 15:29:11.997520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.374 [2024-07-13 15:29:11.997546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.374 [2024-07-13 15:29:12.005542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.374 [2024-07-13 15:29:12.005567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.374 [2024-07-13 15:29:12.013567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.374 [2024-07-13 15:29:12.013593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.374 [2024-07-13 15:29:12.014940] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:17:41.374 [2024-07-13 15:29:12.015012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1099613 ] 00:17:41.374 [2024-07-13 15:29:12.021588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.374 [2024-07-13 15:29:12.021614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.374 [2024-07-13 15:29:12.029609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.374 [2024-07-13 15:29:12.029634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.374 [2024-07-13 15:29:12.037631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.374 [2024-07-13 15:29:12.037657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.374 [2024-07-13 15:29:12.045654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.374 [2024-07-13 15:29:12.045680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.374 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.374 [2024-07-13 15:29:12.049308] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:41.374 [2024-07-13 15:29:12.053678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.374 [2024-07-13 15:29:12.053704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.374 [2024-07-13 15:29:12.061698] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.374 [2024-07-13 15:29:12.061723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.374 [2024-07-13 15:29:12.069721] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.374 [2024-07-13 15:29:12.069745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.374 [2024-07-13 15:29:12.077740] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.374 [2024-07-13 15:29:12.077765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.374 [2024-07-13 15:29:12.081110] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.374 [2024-07-13 15:29:12.085775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.374 [2024-07-13 15:29:12.085802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.374 [2024-07-13 15:29:12.093820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.374 [2024-07-13 15:29:12.093861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.374 [2024-07-13 15:29:12.101809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.374 [2024-07-13 15:29:12.101835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.374 [2024-07-13 15:29:12.109839] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.374 [2024-07-13 15:29:12.109884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.374 [2024-07-13 15:29:12.117849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.374 [2024-07-13 15:29:12.117884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.374 [2024-07-13 15:29:12.125879] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.374 [2024-07-13 15:29:12.125918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.374 [2024-07-13 15:29:12.133944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.374 [2024-07-13 15:29:12.133979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.141972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.142007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.149969] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.149992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.157972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.157994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.165993] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.166015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.174015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.174042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.175309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.634 [2024-07-13 15:29:12.182038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.182059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.190069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.190094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.198113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.198166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.206133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.206191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.214179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.214221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.222203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.222247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.230225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.230270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.238255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.238300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.246237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.246262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.254290] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.254330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.262312] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.262353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.270336] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.270382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.278324] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.278349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.286354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.286380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.294383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.294412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.302404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.302431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.310426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.310453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.318448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.318486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.326469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.326494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.334494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.334519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.342517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.342542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.350538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.350563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.358568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.358594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.366611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.366640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.374615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.374642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.382633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.382658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.390672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.390702] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.634 [2024-07-13 15:29:12.398687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.634 [2024-07-13 15:29:12.398715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.895 Running I/O for 5 seconds... 00:17:41.895 [2024-07-13 15:29:12.406754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.895 [2024-07-13 15:29:12.406782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.895 [2024-07-13 15:29:12.420380] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.895 [2024-07-13 15:29:12.420413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.895 [2024-07-13 15:29:12.432258] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.895 [2024-07-13 15:29:12.432290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.895 [2024-07-13 15:29:12.442912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.895 [2024-07-13 15:29:12.442940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.895 [2024-07-13 15:29:12.456476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.895 [2024-07-13 15:29:12.456508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.895 [2024-07-13 15:29:12.467020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.895 [2024-07-13 15:29:12.467049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.895 [2024-07-13 15:29:12.479051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.895 [2024-07-13 15:29:12.479079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.895 [2024-07-13 15:29:12.490841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.895 [2024-07-13 15:29:12.490892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.895 [2024-07-13 15:29:12.502298] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.895 [2024-07-13 15:29:12.502337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.895 [2024-07-13 15:29:12.515733] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.895 [2024-07-13 15:29:12.515766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.895 [2024-07-13 15:29:12.525857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.895 [2024-07-13 15:29:12.525907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.895 [2024-07-13 15:29:12.537761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.895 [2024-07-13 15:29:12.537791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.895 [2024-07-13 15:29:12.548903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.895 [2024-07-13 15:29:12.548931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.895 [2024-07-13 15:29:12.561811] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.895 [2024-07-13 15:29:12.561839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.895 [2024-07-13 15:29:12.571614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.895 [2024-07-13 15:29:12.571644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.895 [2024-07-13 15:29:12.583222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.895 [2024-07-13 15:29:12.583250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.895 [2024-07-13 15:29:12.594734] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.895 [2024-07-13 15:29:12.594764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.895 [2024-07-13 15:29:12.605254] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.895 [2024-07-13 15:29:12.605282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.895 [2024-07-13 15:29:12.617043] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.895 [2024-07-13 15:29:12.617072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.895 [2024-07-13 15:29:12.627855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.895 [2024-07-13 15:29:12.627904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.895 [2024-07-13 15:29:12.638481] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.895 [2024-07-13 15:29:12.638510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.895 [2024-07-13 15:29:12.649172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.895 [2024-07-13 15:29:12.649213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.895 [2024-07-13 15:29:12.659700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.895 [2024-07-13 15:29:12.659727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.154 [2024-07-13 15:29:12.672326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.154 [2024-07-13 15:29:12.672353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.154 [2024-07-13 15:29:12.682035] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.154 [2024-07-13 15:29:12.682063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.154 [2024-07-13 15:29:12.693125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.154 [2024-07-13 15:29:12.693153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.154 [2024-07-13 15:29:12.703893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.154 [2024-07-13 15:29:12.703919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.154 [2024-07-13 15:29:12.714158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.154 [2024-07-13 15:29:12.714193] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.154 [2024-07-13 15:29:12.724960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.154 [2024-07-13 15:29:12.724988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.154 [2024-07-13 15:29:12.735487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.154 [2024-07-13 15:29:12.735515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.154 [2024-07-13 15:29:12.746158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.154 [2024-07-13 15:29:12.746200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.154 [2024-07-13 15:29:12.756985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.154 [2024-07-13 15:29:12.757013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.154 [2024-07-13 15:29:12.767445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.154 [2024-07-13 15:29:12.767472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.154 [2024-07-13 15:29:12.778394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.154 [2024-07-13 15:29:12.778421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.154 [2024-07-13 15:29:12.790812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.154 [2024-07-13 15:29:12.790838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.154 [2024-07-13 15:29:12.800604] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.154 [2024-07-13 15:29:12.800631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.154 [2024-07-13 15:29:12.811920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.154 [2024-07-13 15:29:12.811948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.154 [2024-07-13 15:29:12.822637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.154 [2024-07-13 15:29:12.822664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.154 [2024-07-13 15:29:12.835394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.154 [2024-07-13 15:29:12.835421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.154 [2024-07-13 15:29:12.845245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.154 [2024-07-13 15:29:12.845272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.154 [2024-07-13 15:29:12.856583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.154 [2024-07-13 15:29:12.856609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.154 [2024-07-13 15:29:12.867134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.154 [2024-07-13 15:29:12.867176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.154 [2024-07-13 15:29:12.877883] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.154 [2024-07-13 15:29:12.877911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.155 [2024-07-13 15:29:12.890455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.155 [2024-07-13 15:29:12.890482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.155 [2024-07-13 15:29:12.899982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.155 [2024-07-13 15:29:12.900009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.155 [2024-07-13 15:29:12.910838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.155 [2024-07-13 15:29:12.910873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.416 [2024-07-13 15:29:12.921610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.416 [2024-07-13 15:29:12.921645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.416 [2024-07-13 15:29:12.932415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.416 [2024-07-13 15:29:12.932442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.416 [2024-07-13 15:29:12.942324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.416 [2024-07-13 15:29:12.942351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.416 [2024-07-13 15:29:12.953686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.416 [2024-07-13 15:29:12.953714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.416 [2024-07-13 15:29:12.964235] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.416 [2024-07-13 15:29:12.964262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.416 [2024-07-13 15:29:12.974925] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.416 [2024-07-13 15:29:12.974954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.416 [2024-07-13 15:29:12.985425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.416 [2024-07-13 15:29:12.985452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.416 [2024-07-13 15:29:12.997883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.416 [2024-07-13 15:29:12.997911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.416 [2024-07-13 15:29:13.007585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.416 [2024-07-13 15:29:13.007613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.416 [2024-07-13 15:29:13.018659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.416 [2024-07-13 15:29:13.018687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.416 [2024-07-13 15:29:13.029949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.416 [2024-07-13 15:29:13.029977] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.416 [2024-07-13 15:29:13.040998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.416 [2024-07-13 15:29:13.041026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.416 [2024-07-13 15:29:13.051369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.417 [2024-07-13 15:29:13.051412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.417 [2024-07-13 15:29:13.062137] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.417 [2024-07-13 15:29:13.062165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.417 [2024-07-13 15:29:13.072205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.417 [2024-07-13 15:29:13.072231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.417 [2024-07-13 15:29:13.082761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.417 [2024-07-13 15:29:13.082788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.417 [2024-07-13 15:29:13.093047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.417 [2024-07-13 15:29:13.093074] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.417 [2024-07-13 15:29:13.103850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.417 [2024-07-13 15:29:13.103900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.417 [2024-07-13 15:29:13.116460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.417 [2024-07-13 15:29:13.116487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.417 [2024-07-13 15:29:13.126345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.417 [2024-07-13 15:29:13.126372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.417 [2024-07-13 15:29:13.137463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.417 [2024-07-13 15:29:13.137489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.417 [2024-07-13 15:29:13.147957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.417 [2024-07-13 15:29:13.147985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.417 [2024-07-13 15:29:13.157853] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.417 [2024-07-13 15:29:13.157906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.417 [2024-07-13 15:29:13.168739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.417 [2024-07-13 15:29:13.168767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.417 [2024-07-13 15:29:13.179517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.417 [2024-07-13 15:29:13.179547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.674 [2024-07-13 15:29:13.190198] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.674 [2024-07-13 15:29:13.190240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.674 [2024-07-13 15:29:13.200801] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.674 [2024-07-13 15:29:13.200829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.674 [2024-07-13 15:29:13.211038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.674 [2024-07-13 15:29:13.211065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.674 [2024-07-13 15:29:13.222307] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.674 [2024-07-13 15:29:13.222335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.674 [2024-07-13 15:29:13.233222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.674 [2024-07-13 15:29:13.233249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.674 [2024-07-13 15:29:13.243786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.674 [2024-07-13 15:29:13.243813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.674 [2024-07-13 15:29:13.254703] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.674 [2024-07-13 15:29:13.254730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.674 [2024-07-13 15:29:13.265520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.674 [2024-07-13 15:29:13.265547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.674 [2024-07-13 15:29:13.276180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.674 [2024-07-13 15:29:13.276207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.674 [2024-07-13 15:29:13.286897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.674 [2024-07-13 15:29:13.286925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.674 [2024-07-13 15:29:13.297671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.674 [2024-07-13 15:29:13.297698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.674 [2024-07-13 15:29:13.308526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.674 [2024-07-13 15:29:13.308553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.674 [2024-07-13 15:29:13.319354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.674 [2024-07-13 15:29:13.319381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.674 [2024-07-13 15:29:13.329985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.674 [2024-07-13 15:29:13.330013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.674 [2024-07-13 15:29:13.341069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.674 [2024-07-13 15:29:13.341097] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.674 [2024-07-13 15:29:13.352078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.674 [2024-07-13 15:29:13.352105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.674 [2024-07-13 15:29:13.362803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.674 [2024-07-13 15:29:13.362830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.674 [2024-07-13 15:29:13.373638] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.674 [2024-07-13 15:29:13.373664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.674 [2024-07-13 15:29:13.384435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.674 [2024-07-13 15:29:13.384462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.674 [2024-07-13 15:29:13.396592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.674 [2024-07-13 15:29:13.396618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.674 [2024-07-13 15:29:13.406214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.674 [2024-07-13 15:29:13.406241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.674 [2024-07-13 15:29:13.417687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.674 [2024-07-13 15:29:13.417715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.674 [2024-07-13 15:29:13.428617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.675 [2024-07-13 15:29:13.428643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.675 [2024-07-13 15:29:13.439027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.675 [2024-07-13 15:29:13.439054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.933 [2024-07-13 15:29:13.451932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.933 [2024-07-13 15:29:13.451960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.933 [2024-07-13 15:29:13.463892] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.933 [2024-07-13 15:29:13.463920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.933 [2024-07-13 15:29:13.472850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.933 [2024-07-13 15:29:13.472899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.933 [2024-07-13 15:29:13.484101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.933 [2024-07-13 15:29:13.484129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.933 [2024-07-13 15:29:13.494148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.933 [2024-07-13 15:29:13.494176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.933 [2024-07-13 15:29:13.504995] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.933 [2024-07-13 15:29:13.505023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.933 [2024-07-13 15:29:13.515444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.933 [2024-07-13 15:29:13.515471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.933 [2024-07-13 15:29:13.525947] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.933 [2024-07-13 15:29:13.525975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.933 [2024-07-13 15:29:13.538649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.933 [2024-07-13 15:29:13.538676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.933 [2024-07-13 15:29:13.548384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.933 [2024-07-13 15:29:13.548412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.933 [2024-07-13 15:29:13.559496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.933 [2024-07-13 15:29:13.559522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.933 [2024-07-13 15:29:13.570242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.933 [2024-07-13 15:29:13.570269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.933 [2024-07-13 15:29:13.582935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.933 [2024-07-13 15:29:13.582963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.933 [2024-07-13 15:29:13.592291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.933 [2024-07-13 15:29:13.592318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.933 [2024-07-13 15:29:13.603655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.933 [2024-07-13 15:29:13.603682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.933 [2024-07-13 15:29:13.614295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.933 [2024-07-13 15:29:13.614321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.933 [2024-07-13 15:29:13.625021] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.933 [2024-07-13 15:29:13.625048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.933 [2024-07-13 15:29:13.635474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.933 [2024-07-13 15:29:13.635500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.933 [2024-07-13 15:29:13.646145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.933 [2024-07-13 15:29:13.646172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.933 [2024-07-13 15:29:13.656630] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.933 [2024-07-13 15:29:13.656657] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.933 [2024-07-13 15:29:13.667199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.933 [2024-07-13 15:29:13.667226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.933 [2024-07-13 15:29:13.678070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.933 [2024-07-13 15:29:13.678100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.933 [2024-07-13 15:29:13.688845] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.933 [2024-07-13 15:29:13.688896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.699370] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.699402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.709908] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.709936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.720926] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.720954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.731713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.731747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.742390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.742418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.753192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.753219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.763825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.763852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.774900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.774937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.785628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.785655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.796374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.796401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.806562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.806588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.817153] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.817181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.829724] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.829750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.839443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.839469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.850239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.850265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.859595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.859623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.870324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.870351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.880686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.880713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.891186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.891213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.901394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.901421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.912654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.912681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.923289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.923316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.934000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.934035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.944579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.944606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.190 [2024-07-13 15:29:13.954811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.190 [2024-07-13 15:29:13.954839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.450 [2024-07-13 15:29:13.965364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.450 [2024-07-13 15:29:13.965391] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.450 [2024-07-13 15:29:13.976088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.450 [2024-07-13 15:29:13.976115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.450 [2024-07-13 15:29:13.986857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.450 [2024-07-13 15:29:13.986909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.450 [2024-07-13 15:29:13.997370] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.450 [2024-07-13 15:29:13.997397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.450 [2024-07-13 15:29:14.007682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.450 [2024-07-13 15:29:14.007709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.450 [2024-07-13 15:29:14.019721] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.450 [2024-07-13 15:29:14.019748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.450 [2024-07-13 15:29:14.029142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.450 [2024-07-13 15:29:14.029171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.450 [2024-07-13 15:29:14.040504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.450 [2024-07-13 15:29:14.040530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.450 [2024-07-13 15:29:14.051309] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.450 [2024-07-13 15:29:14.051336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.450 [2024-07-13 15:29:14.062224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.450 [2024-07-13 15:29:14.062252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.450 [2024-07-13 15:29:14.073451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.450 [2024-07-13 15:29:14.073478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.450 [2024-07-13 15:29:14.084322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.450 [2024-07-13 15:29:14.084363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.450 [2024-07-13 15:29:14.095569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.450 [2024-07-13 15:29:14.095596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.450 [2024-07-13 15:29:14.106204] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.450 [2024-07-13 15:29:14.106231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.450 [2024-07-13 15:29:14.119138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.450 [2024-07-13 15:29:14.119179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.450 [2024-07-13 15:29:14.129316] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.450 [2024-07-13 15:29:14.129342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.450 [2024-07-13 15:29:14.140490] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.450 [2024-07-13 15:29:14.140525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.450 [2024-07-13 15:29:14.151015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.450 [2024-07-13 15:29:14.151043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.450 [2024-07-13 15:29:14.161755] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.450 [2024-07-13 15:29:14.161782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.450 [2024-07-13 15:29:14.172875] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.450 [2024-07-13 15:29:14.172902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.450 [2024-07-13 15:29:14.184373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.450 [2024-07-13 15:29:14.184400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.450 [2024-07-13 15:29:14.195408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.450 [2024-07-13 15:29:14.195436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.450 [2024-07-13 15:29:14.205954] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.450 [2024-07-13 15:29:14.205982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.723 [2024-07-13 15:29:14.216460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.723 [2024-07-13 15:29:14.216487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.723 [2024-07-13 15:29:14.227265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.723 [2024-07-13 15:29:14.227292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.723 [2024-07-13 15:29:14.238367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.723 [2024-07-13 15:29:14.238394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.723 [2024-07-13 15:29:14.248112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.723 [2024-07-13 15:29:14.248140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.723 [2024-07-13 15:29:14.259042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.723 [2024-07-13 15:29:14.259085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.723 [2024-07-13 15:29:14.269901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.723 [2024-07-13 15:29:14.269929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.723 [2024-07-13 15:29:14.280849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.723 [2024-07-13 15:29:14.280885] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.723 [2024-07-13 15:29:14.291474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.723 [2024-07-13 15:29:14.291501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.723 [2024-07-13 15:29:14.302572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.723 [2024-07-13 15:29:14.302600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.723 [2024-07-13 15:29:14.313207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.723 [2024-07-13 15:29:14.313236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.723 [2024-07-13 15:29:14.323856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.723 [2024-07-13 15:29:14.323893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.723 [2024-07-13 15:29:14.334729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.723 [2024-07-13 15:29:14.334757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.723 [2024-07-13 15:29:14.345482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.723 [2024-07-13 15:29:14.345516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.723 [2024-07-13 15:29:14.355717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.723 [2024-07-13 15:29:14.355744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.723 [2024-07-13 15:29:14.366217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.723 [2024-07-13 15:29:14.366244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.723 [2024-07-13 15:29:14.378610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.723 [2024-07-13 15:29:14.378637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.723 [2024-07-13 15:29:14.388296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.723 [2024-07-13 15:29:14.388323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.723 [2024-07-13 15:29:14.399492] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.723 [2024-07-13 15:29:14.399519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.723 [2024-07-13 15:29:14.410506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.723 [2024-07-13 15:29:14.410533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.724 [2024-07-13 15:29:14.421452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.724 [2024-07-13 15:29:14.421479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.724 [2024-07-13 15:29:14.432451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.724 [2024-07-13 15:29:14.432477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.724 [2024-07-13 15:29:14.443749] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.724 [2024-07-13 15:29:14.443775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.724 [2024-07-13 15:29:14.456409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.724 [2024-07-13 15:29:14.456435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.724 [2024-07-13 15:29:14.465874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.724 [2024-07-13 15:29:14.465905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.724 [2024-07-13 15:29:14.477480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.724 [2024-07-13 15:29:14.477507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.724 [2024-07-13 15:29:14.488393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.724 [2024-07-13 15:29:14.488419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.983 [2024-07-13 15:29:14.499307] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.983 [2024-07-13 15:29:14.499335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.983 [2024-07-13 15:29:14.509646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.983 [2024-07-13 15:29:14.509688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.983 [2024-07-13 15:29:14.520763] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.983 [2024-07-13 15:29:14.520790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.983 [2024-07-13 15:29:14.531835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.983 [2024-07-13 15:29:14.531873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.983 [2024-07-13 15:29:14.542450] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.983 [2024-07-13 15:29:14.542477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.983 [2024-07-13 15:29:14.553208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.983 [2024-07-13 15:29:14.553242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.983 [2024-07-13 15:29:14.564127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.983 [2024-07-13 15:29:14.564170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.983 [2024-07-13 15:29:14.574984] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.983 [2024-07-13 15:29:14.575012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.983 [2024-07-13 15:29:14.585723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.983 [2024-07-13 15:29:14.585750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.983 [2024-07-13 15:29:14.596498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.983 [2024-07-13 15:29:14.596524] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.983 [2024-07-13 15:29:14.606967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.983 [2024-07-13 15:29:14.606994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.983 [2024-07-13 15:29:14.617370] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.983 [2024-07-13 15:29:14.617397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.983 [2024-07-13 15:29:14.628197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.983 [2024-07-13 15:29:14.628224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.983 [2024-07-13 15:29:14.639221] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.983 [2024-07-13 15:29:14.639247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.983 [2024-07-13 15:29:14.649876] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.983 [2024-07-13 15:29:14.649904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.983 [2024-07-13 15:29:14.660678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.983 [2024-07-13 15:29:14.660704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.983 [2024-07-13 15:29:14.671891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.983 [2024-07-13 15:29:14.671918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.983 [2024-07-13 15:29:14.682642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.983 [2024-07-13 15:29:14.682669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-13 15:29:14.692929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-13 15:29:14.692958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-13 15:29:14.704242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-13 15:29:14.704269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-13 15:29:14.714657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-13 15:29:14.714683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-13 15:29:14.725385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-13 15:29:14.725412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-13 15:29:14.736095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-13 15:29:14.736123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.984 [2024-07-13 15:29:14.746773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.984 [2024-07-13 15:29:14.746801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.242 [2024-07-13 15:29:14.757384] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.242 [2024-07-13 15:29:14.757412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.242 [2024-07-13 15:29:14.767649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.242 [2024-07-13 15:29:14.767675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.242 [2024-07-13 15:29:14.778133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.242 [2024-07-13 15:29:14.778175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.242 [2024-07-13 15:29:14.788586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.242 [2024-07-13 15:29:14.788613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.242 [2024-07-13 15:29:14.799700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.242 [2024-07-13 15:29:14.799726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.242 [2024-07-13 15:29:14.812459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.242 [2024-07-13 15:29:14.812485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.242 [2024-07-13 15:29:14.821571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.242 [2024-07-13 15:29:14.821597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.242 [2024-07-13 15:29:14.834885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.242 [2024-07-13 15:29:14.834913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.242 [2024-07-13 15:29:14.845429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.242 [2024-07-13 15:29:14.845456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.242 [2024-07-13 15:29:14.856068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.242 [2024-07-13 15:29:14.856095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.242 [2024-07-13 15:29:14.866777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.242 [2024-07-13 15:29:14.866804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.242 [2024-07-13 15:29:14.877563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.242 [2024-07-13 15:29:14.877590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.242 [2024-07-13 15:29:14.890255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.242 [2024-07-13 15:29:14.890282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.242 [2024-07-13 15:29:14.900006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.242 [2024-07-13 15:29:14.900034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.242 [2024-07-13 15:29:14.911803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.242 [2024-07-13 15:29:14.911831] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.242 [2024-07-13 15:29:14.922491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.242 [2024-07-13 15:29:14.922518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.242 [2024-07-13 15:29:14.933208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.242 [2024-07-13 15:29:14.933236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.242 [2024-07-13 15:29:14.944062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.242 [2024-07-13 15:29:14.944090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.242 [2024-07-13 15:29:14.954940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.242 [2024-07-13 15:29:14.954967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.242 [2024-07-13 15:29:14.965490] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.242 [2024-07-13 15:29:14.965517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.242 [2024-07-13 15:29:14.976206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.242 [2024-07-13 15:29:14.976233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.242 [2024-07-13 15:29:14.986910] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.242 [2024-07-13 15:29:14.986938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.242 [2024-07-13 15:29:14.997499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.242 [2024-07-13 15:29:14.997526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.008332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.008359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.019377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.019403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.030515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.030541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.041237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.041264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.051896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.051924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.062272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.062299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.073945] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.073973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.083215] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.083242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.094342] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.094369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.106757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.106784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.116848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.116883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.128277] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.128305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.140329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.140355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.150255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.150298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.161751] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.161777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.174270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.174297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.183634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.183661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.195103] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.195130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.204715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.204742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.215772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.215799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.226365] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.226392] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.236971] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.236999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.249441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.249467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.502 [2024-07-13 15:29:15.258518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.502 [2024-07-13 15:29:15.258544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.270022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.270051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.282613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.282639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.292743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.292770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.303974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.304002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.314367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.314408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.324701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.324728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.335419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.335446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.346288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.346315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.358170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.358198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.367749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.367786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.378749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.378776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.391646] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.391673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.401265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.401292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.412456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.412482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.424816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.424844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.434833] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.434886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.446200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.446227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.456949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.456977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.467305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.467333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.478234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.478261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.489308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.489336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.499818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.499844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.510687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.510715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:44.761 [2024-07-13 15:29:15.521521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:44.761 [2024-07-13 15:29:15.521548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.023 [2024-07-13 15:29:15.532350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.023 [2024-07-13 15:29:15.532377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.023 [2024-07-13 15:29:15.544929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.023 [2024-07-13 15:29:15.544957] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.023 [2024-07-13 15:29:15.554503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.023 [2024-07-13 15:29:15.554529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.023 [2024-07-13 15:29:15.565970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.023 [2024-07-13 15:29:15.565998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.023 [2024-07-13 15:29:15.579014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.023 [2024-07-13 15:29:15.579050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.023 [2024-07-13 15:29:15.588667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.023 [2024-07-13 15:29:15.588694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.023 [2024-07-13 15:29:15.600236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.023 [2024-07-13 15:29:15.600263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.023 [2024-07-13 15:29:15.610710] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.023 [2024-07-13 15:29:15.610736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.023 [2024-07-13 15:29:15.621633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.023 [2024-07-13 15:29:15.621660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.023 [2024-07-13 15:29:15.632409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.023 [2024-07-13 15:29:15.632435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.023 [2024-07-13 15:29:15.643481] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.023 [2024-07-13 15:29:15.643508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.023 [2024-07-13 15:29:15.654384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.023 [2024-07-13 15:29:15.654411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.023 [2024-07-13 15:29:15.664922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.023 [2024-07-13 15:29:15.664949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.023 [2024-07-13 15:29:15.674385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.023 [2024-07-13 15:29:15.674411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.023 [2024-07-13 15:29:15.685959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.023 [2024-07-13 15:29:15.685987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.023 [2024-07-13 15:29:15.696096] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.023 [2024-07-13 15:29:15.696123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.023 [2024-07-13 15:29:15.707549] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:45.023 [2024-07-13 15:29:15.707575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:45.023
[... the same pair of errors (subsystem.c:2054 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1546 "Unable to add namespace") repeats at roughly 10 ms intervals from 15:29:15.718 through 15:29:17.424 ...]
00:17:46.845
00:17:46.845 Latency(us)
00:17:46.845 Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average   min      max
00:17:46.845 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:17:46.845 Nvme1n1 : 5.01  11832.95  92.44  0.00  0.00  10802.77  4781.70  23884.23
00:17:46.845 ===================================================================================================================
00:17:46.845 Total : 11832.95  92.44  0.00  0.00  10802.77  4781.70  23884.23
00:17:46.845 [... after the I/O summary the same "Requested NSID 1 already in use" / "Unable to add namespace" error pair continues, now at roughly 8 ms intervals, from 15:29:17.432 through 15:29:17.640 ...]
00:17:47.102 [2024-07-13 15:29:17.648630] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:47.102 [2024-07-13 15:29:17.648654]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:47.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1099613) - No such process 00:17:47.102 15:29:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1099613 00:17:47.102 15:29:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:47.102 15:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.102 15:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:47.102 15:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.102 15:29:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:47.102 15:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.102 15:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:47.102 delay0 00:17:47.102 15:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.102 15:29:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:47.102 15:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.102 15:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:47.102 15:29:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.102 15:29:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:47.102 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.102 [2024-07-13 15:29:17.767119] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:17:53.704 Initializing NVMe Controllers 00:17:53.704 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:53.704 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:53.704 Initialization complete. Launching workers. 
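The sequence just above (remove namespace 1, wrap malloc0 in a delay bdev, re-attach it as NSID 1, and launch the abort example) can be replayed by hand against the same target. The following is only a sketch: it assumes the SPDK checkout path used by this job, that the malloc0 bdev created earlier in the test still exists, and that the test's rpc_cmd helper is equivalent to invoking scripts/rpc.py directly.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Detach the namespace that the failing add-ns calls were colliding with.
  $SPDK/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

  # Re-expose the same storage behind a delay bdev (delay values in microseconds,
  # matching the log above) and attach it back as NSID 1.
  $SPDK/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

  # Drive the slow namespace over NVMe/TCP with the abort example, as the test does.
  $SPDK/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'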
00:17:53.704 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 104 00:17:53.704 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 391, failed to submit 33 00:17:53.704 success 205, unsuccess 186, failed 0 00:17:53.704 15:29:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:17:53.704 15:29:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:17:53.704 15:29:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:53.704 15:29:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:17:53.704 15:29:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:53.704 15:29:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:17:53.704 15:29:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:53.704 15:29:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:53.704 rmmod nvme_tcp 00:17:53.704 rmmod nvme_fabrics 00:17:53.704 rmmod nvme_keyring 00:17:53.704 15:29:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:53.704 15:29:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:17:53.704 15:29:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:17:53.704 15:29:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1098397 ']' 00:17:53.704 15:29:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1098397 00:17:53.704 15:29:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1098397 ']' 00:17:53.704 15:29:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1098397 00:17:53.704 15:29:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:17:53.704 15:29:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:53.704 15:29:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1098397 00:17:53.704 15:29:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:53.704 15:29:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:53.704 15:29:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1098397' 00:17:53.704 killing process with pid 1098397 00:17:53.704 15:29:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1098397 00:17:53.704 15:29:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1098397 00:17:53.704 15:29:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:53.704 15:29:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:53.704 15:29:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:53.704 15:29:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:53.704 15:29:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:53.704 15:29:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.704 15:29:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.704 15:29:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.608 15:29:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:55.608 00:17:55.608 real 0m27.694s 00:17:55.608 user 0m40.540s 00:17:55.608 sys 0m8.503s 00:17:55.608 15:29:26 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:17:55.608 15:29:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:55.608 ************************************ 00:17:55.608 END TEST nvmf_zcopy 00:17:55.608 ************************************ 00:17:55.608 15:29:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:55.608 15:29:26 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:55.608 15:29:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:55.608 15:29:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:55.608 15:29:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:55.866 ************************************ 00:17:55.867 START TEST nvmf_nmic 00:17:55.867 ************************************ 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:55.867 * Looking for test storage... 00:17:55.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:17:55.867 15:29:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:57.768 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:57.768 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:57.768 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.768 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:57.769 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:57.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:17:57.769 00:17:57.769 --- 10.0.0.2 ping statistics --- 00:17:57.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.769 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:57.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:57.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:17:57.769 00:17:57.769 --- 10.0.0.1 ping statistics --- 00:17:57.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.769 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1102985 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1102985 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1102985 ']' 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:57.769 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:58.028 [2024-07-13 15:29:28.572986] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:17:58.028 [2024-07-13 15:29:28.573060] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.028 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.028 [2024-07-13 15:29:28.613724] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:58.028 [2024-07-13 15:29:28.646151] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:58.028 [2024-07-13 15:29:28.745188] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:58.028 [2024-07-13 15:29:28.745261] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.028 [2024-07-13 15:29:28.745278] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.028 [2024-07-13 15:29:28.745292] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.028 [2024-07-13 15:29:28.745305] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.028 [2024-07-13 15:29:28.745363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.028 [2024-07-13 15:29:28.745429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.028 [2024-07-13 15:29:28.745452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:58.028 [2024-07-13 15:29:28.745456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:58.287 [2024-07-13 15:29:28.902945] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:58.287 Malloc0 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 
00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:58.287 [2024-07-13 15:29:28.955633] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:58.287 test case1: single bdev can't be used in multiple subsystems 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:58.287 [2024-07-13 15:29:28.979498] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:17:58.287 [2024-07-13 15:29:28.979528] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:58.287 [2024-07-13 15:29:28.979559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.287 request: 00:17:58.287 { 00:17:58.287 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:58.287 "namespace": { 00:17:58.287 "bdev_name": "Malloc0", 00:17:58.287 "no_auto_visible": false 00:17:58.287 }, 00:17:58.287 "method": "nvmf_subsystem_add_ns", 00:17:58.287 "req_id": 1 00:17:58.287 } 00:17:58.287 Got JSON-RPC error response 00:17:58.287 response: 00:17:58.287 { 00:17:58.287 "code": -32602, 00:17:58.287 "message": "Invalid parameters" 00:17:58.287 } 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:17:58.287 15:29:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:58.288 15:29:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:17:58.288 Adding namespace failed - expected result. 
00:17:58.288 15:29:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:58.288 test case2: host connect to nvmf target in multiple paths 00:17:58.288 15:29:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:58.288 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.288 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:58.288 [2024-07-13 15:29:28.987602] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:58.288 15:29:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.288 15:29:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:58.854 15:29:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:17:59.788 15:29:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:59.788 15:29:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:17:59.788 15:29:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:59.788 15:29:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:59.788 15:29:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:18:01.689 15:29:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:01.689 15:29:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:01.689 15:29:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:01.689 15:29:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:01.689 15:29:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:01.689 15:29:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:18:01.689 15:29:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:01.689 [global] 00:18:01.689 thread=1 00:18:01.689 invalidate=1 00:18:01.689 rw=write 00:18:01.689 time_based=1 00:18:01.689 runtime=1 00:18:01.689 ioengine=libaio 00:18:01.689 direct=1 00:18:01.689 bs=4096 00:18:01.689 iodepth=1 00:18:01.689 norandommap=0 00:18:01.689 numjobs=1 00:18:01.689 00:18:01.689 verify_dump=1 00:18:01.689 verify_backlog=512 00:18:01.689 verify_state_save=0 00:18:01.689 do_verify=1 00:18:01.689 verify=crc32c-intel 00:18:01.689 [job0] 00:18:01.689 filename=/dev/nvme0n1 00:18:01.689 Could not set queue depth (nvme0n1) 00:18:01.947 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:01.947 fio-3.35 00:18:01.947 Starting 1 thread 00:18:03.323 00:18:03.323 job0: (groupid=0, jobs=1): err= 0: pid=1103498: Sat Jul 13 15:29:33 2024 00:18:03.323 read: IOPS=18, BW=74.0KiB/s (75.8kB/s)(76.0KiB/1027msec) 00:18:03.323 slat (nsec): min=14792, max=35273, avg=18258.21, stdev=7153.19 
00:18:03.323 clat (usec): min=40977, max=42253, avg=41941.65, stdev=243.65 00:18:03.323 lat (usec): min=40993, max=42268, avg=41959.91, stdev=243.98 00:18:03.323 clat percentiles (usec): 00:18:03.323 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:18:03.323 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:18:03.323 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:03.323 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:03.323 | 99.99th=[42206] 00:18:03.323 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:18:03.323 slat (usec): min=7, max=28764, avg=82.04, stdev=1270.14 00:18:03.323 clat (usec): min=182, max=535, avg=359.74, stdev=75.62 00:18:03.323 lat (usec): min=198, max=29074, avg=441.78, stdev=1270.57 00:18:03.323 clat percentiles (usec): 00:18:03.323 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 251], 20.00th=[ 302], 00:18:03.323 | 30.00th=[ 338], 40.00th=[ 355], 50.00th=[ 371], 60.00th=[ 383], 00:18:03.323 | 70.00th=[ 404], 80.00th=[ 416], 90.00th=[ 449], 95.00th=[ 478], 00:18:03.323 | 99.00th=[ 502], 99.50th=[ 523], 99.90th=[ 537], 99.95th=[ 537], 00:18:03.323 | 99.99th=[ 537] 00:18:03.323 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:03.323 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:03.323 lat (usec) : 250=9.60%, 500=85.50%, 750=1.32% 00:18:03.323 lat (msec) : 50=3.58% 00:18:03.323 cpu : usr=1.07%, sys=1.27%, ctx=534, majf=0, minf=2 00:18:03.323 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:03.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:03.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:03.323 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:03.323 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:03.323 00:18:03.323 Run status group 0 (all jobs): 00:18:03.323 READ: bw=74.0KiB/s (75.8kB/s), 74.0KiB/s-74.0KiB/s (75.8kB/s-75.8kB/s), io=76.0KiB (77.8kB), run=1027-1027msec 00:18:03.323 WRITE: bw=1994KiB/s (2042kB/s), 1994KiB/s-1994KiB/s (2042kB/s-2042kB/s), io=2048KiB (2097kB), run=1027-1027msec 00:18:03.323 00:18:03.323 Disk stats (read/write): 00:18:03.323 nvme0n1: ios=42/512, merge=0/0, ticks=1614/162, in_queue=1776, util=98.50% 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:03.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:03.323 rmmod nvme_tcp 00:18:03.323 rmmod nvme_fabrics 00:18:03.323 rmmod nvme_keyring 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1102985 ']' 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1102985 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1102985 ']' 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1102985 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1102985 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:03.323 15:29:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1102985' 00:18:03.323 killing process with pid 1102985 00:18:03.324 15:29:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1102985 00:18:03.324 15:29:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1102985 00:18:03.582 15:29:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:03.582 15:29:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:03.582 15:29:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:03.582 15:29:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:03.582 15:29:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:03.582 15:29:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.582 15:29:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.582 15:29:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.485 15:29:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:05.485 00:18:05.485 real 0m9.784s 00:18:05.485 user 0m22.257s 00:18:05.485 sys 0m2.253s 00:18:05.485 15:29:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:05.485 15:29:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:05.485 ************************************ 00:18:05.485 END TEST nvmf_nmic 00:18:05.485 ************************************ 00:18:05.485 15:29:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:05.485 15:29:36 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:05.485 15:29:36 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:05.485 15:29:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:05.485 15:29:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:05.485 ************************************ 00:18:05.485 START TEST nvmf_fio_target 00:18:05.485 ************************************ 00:18:05.485 15:29:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:05.742 * Looking for test storage... 00:18:05.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:05.742 15:29:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:07.639 15:29:38 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:07.639 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:07.639 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.639 15:29:38 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:07.639 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:07.639 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:07.639 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:07.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:07.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:18:07.640 00:18:07.640 --- 10.0.0.2 ping statistics --- 00:18:07.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.640 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:07.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:07.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:18:07.640 00:18:07.640 --- 10.0.0.1 ping statistics --- 00:18:07.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.640 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1105667 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1105667 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1105667 ']' 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:07.640 15:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.898 [2024-07-13 15:29:38.420040] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:18:07.898 [2024-07-13 15:29:38.420129] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.898 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.898 [2024-07-13 15:29:38.462620] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:07.898 [2024-07-13 15:29:38.490413] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:07.898 [2024-07-13 15:29:38.582054] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.898 [2024-07-13 15:29:38.582109] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.898 [2024-07-13 15:29:38.582139] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.898 [2024-07-13 15:29:38.582151] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.898 [2024-07-13 15:29:38.582162] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:07.898 [2024-07-13 15:29:38.582214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.898 [2024-07-13 15:29:38.582277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.898 [2024-07-13 15:29:38.582300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:07.898 [2024-07-13 15:29:38.582304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.155 15:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:08.155 15:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:18:08.155 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:08.155 15:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:08.155 15:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.155 15:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.155 15:29:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:08.412 [2024-07-13 15:29:39.011656] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.412 15:29:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:08.669 15:29:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:08.669 15:29:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:08.927 15:29:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:08.927 15:29:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:09.184 15:29:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:09.184 15:29:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:09.441 15:29:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:09.441 15:29:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:09.698 15:29:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:09.962 15:29:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:09.962 15:29:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:10.241 15:29:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:10.241 15:29:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:10.499 15:29:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:10.499 15:29:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:10.757 15:29:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:11.015 15:29:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:11.015 15:29:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:11.273 15:29:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:11.273 15:29:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:11.531 15:29:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:11.788 [2024-07-13 15:29:42.431860] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.788 15:29:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:12.046 15:29:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:12.303 15:29:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:12.866 15:29:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # 
waitforserial SPDKISFASTANDAWESOME 4 00:18:12.866 15:29:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:18:12.866 15:29:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:12.866 15:29:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:18:12.866 15:29:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:18:12.866 15:29:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:18:15.393 15:29:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:15.393 15:29:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:15.393 15:29:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:15.393 15:29:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:18:15.393 15:29:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:15.393 15:29:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:18:15.393 15:29:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:15.393 [global] 00:18:15.393 thread=1 00:18:15.393 invalidate=1 00:18:15.393 rw=write 00:18:15.393 time_based=1 00:18:15.393 runtime=1 00:18:15.393 ioengine=libaio 00:18:15.393 direct=1 00:18:15.393 bs=4096 00:18:15.393 iodepth=1 00:18:15.393 norandommap=0 00:18:15.393 numjobs=1 00:18:15.393 00:18:15.393 verify_dump=1 00:18:15.393 verify_backlog=512 00:18:15.393 verify_state_save=0 00:18:15.393 do_verify=1 00:18:15.393 verify=crc32c-intel 00:18:15.393 [job0] 00:18:15.393 filename=/dev/nvme0n1 00:18:15.393 [job1] 00:18:15.393 filename=/dev/nvme0n2 00:18:15.393 [job2] 00:18:15.393 filename=/dev/nvme0n3 00:18:15.393 [job3] 00:18:15.393 filename=/dev/nvme0n4 00:18:15.393 Could not set queue depth (nvme0n1) 00:18:15.393 Could not set queue depth (nvme0n2) 00:18:15.393 Could not set queue depth (nvme0n3) 00:18:15.393 Could not set queue depth (nvme0n4) 00:18:15.393 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:15.393 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:15.393 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:15.393 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:15.393 fio-3.35 00:18:15.393 Starting 4 threads 00:18:16.328 00:18:16.328 job0: (groupid=0, jobs=1): err= 0: pid=1106637: Sat Jul 13 15:29:47 2024 00:18:16.328 read: IOPS=20, BW=82.8KiB/s (84.8kB/s)(84.0KiB/1014msec) 00:18:16.328 slat (nsec): min=7619, max=33438, avg=17245.95, stdev=8195.39 00:18:16.328 clat (usec): min=40878, max=42002, avg=41082.44, stdev=309.26 00:18:16.328 lat (usec): min=40911, max=42018, avg=41099.69, stdev=307.66 00:18:16.328 clat percentiles (usec): 00:18:16.328 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:16.328 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:16.328 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:18:16.328 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:16.328 | 
99.99th=[42206] 00:18:16.328 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:18:16.328 slat (nsec): min=8125, max=74442, avg=22394.82, stdev=11978.18 00:18:16.328 clat (usec): min=184, max=499, avg=267.06, stdev=60.97 00:18:16.328 lat (usec): min=195, max=553, avg=289.46, stdev=61.45 00:18:16.328 clat percentiles (usec): 00:18:16.328 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 217], 00:18:16.328 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 245], 60.00th=[ 269], 00:18:16.328 | 70.00th=[ 293], 80.00th=[ 326], 90.00th=[ 355], 95.00th=[ 383], 00:18:16.328 | 99.00th=[ 449], 99.50th=[ 453], 99.90th=[ 498], 99.95th=[ 498], 00:18:16.328 | 99.99th=[ 498] 00:18:16.328 bw ( KiB/s): min= 4087, max= 4087, per=51.94%, avg=4087.00, stdev= 0.00, samples=1 00:18:16.328 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:18:16.328 lat (usec) : 250=51.03%, 500=45.03% 00:18:16.328 lat (msec) : 50=3.94% 00:18:16.328 cpu : usr=0.79%, sys=0.79%, ctx=535, majf=0, minf=1 00:18:16.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:16.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.328 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:16.328 job1: (groupid=0, jobs=1): err= 0: pid=1106638: Sat Jul 13 15:29:47 2024 00:18:16.328 read: IOPS=22, BW=88.4KiB/s (90.5kB/s)(92.0KiB/1041msec) 00:18:16.328 slat (nsec): min=10588, max=34514, avg=21934.39, stdev=8772.77 00:18:16.328 clat (usec): min=805, max=42057, avg=39299.54, stdev=8397.11 00:18:16.328 lat (usec): min=831, max=42092, avg=39321.47, stdev=8396.36 00:18:16.328 clat percentiles (usec): 00:18:16.328 | 1.00th=[ 807], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:18:16.328 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:16.328 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:18:16.328 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:16.328 | 99.99th=[42206] 00:18:16.328 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:18:16.328 slat (nsec): min=9022, max=70972, avg=18964.87, stdev=7767.44 00:18:16.328 clat (usec): min=199, max=446, avg=243.21, stdev=35.25 00:18:16.328 lat (usec): min=211, max=514, avg=262.18, stdev=37.33 00:18:16.328 clat percentiles (usec): 00:18:16.328 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 223], 00:18:16.328 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 233], 60.00th=[ 237], 00:18:16.328 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 281], 95.00th=[ 318], 00:18:16.328 | 99.00th=[ 400], 99.50th=[ 412], 99.90th=[ 449], 99.95th=[ 449], 00:18:16.328 | 99.99th=[ 449] 00:18:16.328 bw ( KiB/s): min= 4087, max= 4087, per=51.94%, avg=4087.00, stdev= 0.00, samples=1 00:18:16.328 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:18:16.328 lat (usec) : 250=76.26%, 500=19.44%, 1000=0.19% 00:18:16.328 lat (msec) : 50=4.11% 00:18:16.328 cpu : usr=0.87%, sys=1.06%, ctx=535, majf=0, minf=1 00:18:16.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:16.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.328 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:18:16.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:16.328 job2: (groupid=0, jobs=1): err= 0: pid=1106639: Sat Jul 13 15:29:47 2024 00:18:16.328 read: IOPS=20, BW=82.8KiB/s (84.8kB/s)(84.0KiB/1014msec) 00:18:16.328 slat (nsec): min=9925, max=39617, avg=21348.48, stdev=9965.69 00:18:16.328 clat (usec): min=40734, max=42043, avg=41005.37, stdev=247.12 00:18:16.328 lat (usec): min=40744, max=42056, avg=41026.72, stdev=246.00 00:18:16.328 clat percentiles (usec): 00:18:16.328 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:18:16.328 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:16.328 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:16.328 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:16.328 | 99.99th=[42206] 00:18:16.328 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:18:16.328 slat (nsec): min=9741, max=76837, avg=22354.05, stdev=10502.11 00:18:16.328 clat (usec): min=197, max=447, avg=269.90, stdev=46.71 00:18:16.328 lat (usec): min=209, max=489, avg=292.26, stdev=51.09 00:18:16.328 clat percentiles (usec): 00:18:16.328 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 227], 20.00th=[ 239], 00:18:16.328 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 262], 00:18:16.328 | 70.00th=[ 273], 80.00th=[ 289], 90.00th=[ 351], 95.00th=[ 379], 00:18:16.328 | 99.00th=[ 424], 99.50th=[ 437], 99.90th=[ 449], 99.95th=[ 449], 00:18:16.328 | 99.99th=[ 449] 00:18:16.328 bw ( KiB/s): min= 4087, max= 4087, per=51.94%, avg=4087.00, stdev= 0.00, samples=1 00:18:16.328 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:18:16.328 lat (usec) : 250=36.96%, 500=59.10% 00:18:16.328 lat (msec) : 50=3.94% 00:18:16.328 cpu : usr=0.89%, sys=1.18%, ctx=534, majf=0, minf=2 00:18:16.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:16.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.328 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:16.328 job3: (groupid=0, jobs=1): err= 0: pid=1106640: Sat Jul 13 15:29:47 2024 00:18:16.328 read: IOPS=20, BW=83.9KiB/s (85.9kB/s)(84.0KiB/1001msec) 00:18:16.328 slat (nsec): min=13236, max=34157, avg=19044.57, stdev=8094.87 00:18:16.328 clat (usec): min=40686, max=42027, avg=41152.73, stdev=398.84 00:18:16.328 lat (usec): min=40701, max=42041, avg=41171.77, stdev=399.08 00:18:16.328 clat percentiles (usec): 00:18:16.328 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:18:16.328 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:16.328 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:18:16.328 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:16.328 | 99.99th=[42206] 00:18:16.328 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:18:16.328 slat (nsec): min=7646, max=62410, avg=19285.84, stdev=9072.07 00:18:16.328 clat (usec): min=201, max=551, avg=242.89, stdev=38.64 00:18:16.328 lat (usec): min=212, max=574, avg=262.18, stdev=38.99 00:18:16.328 clat percentiles (usec): 00:18:16.328 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 221], 00:18:16.328 | 30.00th=[ 225], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 237], 00:18:16.328 | 70.00th=[ 
243], 80.00th=[ 258], 90.00th=[ 281], 95.00th=[ 326], 00:18:16.328 | 99.00th=[ 371], 99.50th=[ 461], 99.90th=[ 553], 99.95th=[ 553], 00:18:16.328 | 99.99th=[ 553] 00:18:16.328 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:18:16.328 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:16.328 lat (usec) : 250=75.05%, 500=20.64%, 750=0.38% 00:18:16.328 lat (msec) : 50=3.94% 00:18:16.328 cpu : usr=0.40%, sys=1.00%, ctx=534, majf=0, minf=1 00:18:16.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:16.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.328 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:16.328 00:18:16.328 Run status group 0 (all jobs): 00:18:16.328 READ: bw=330KiB/s (338kB/s), 82.8KiB/s-88.4KiB/s (84.8kB/s-90.5kB/s), io=344KiB (352kB), run=1001-1041msec 00:18:16.328 WRITE: bw=7869KiB/s (8058kB/s), 1967KiB/s-2046KiB/s (2015kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1041msec 00:18:16.328 00:18:16.328 Disk stats (read/write): 00:18:16.328 nvme0n1: ios=39/512, merge=0/0, ticks=1526/132, in_queue=1658, util=84.57% 00:18:16.328 nvme0n2: ios=68/512, merge=0/0, ticks=767/127, in_queue=894, util=89.90% 00:18:16.328 nvme0n3: ios=45/512, merge=0/0, ticks=1532/126, in_queue=1658, util=92.41% 00:18:16.328 nvme0n4: ios=74/512, merge=0/0, ticks=1445/107, in_queue=1552, util=94.14% 00:18:16.328 15:29:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:16.328 [global] 00:18:16.328 thread=1 00:18:16.328 invalidate=1 00:18:16.328 rw=randwrite 00:18:16.328 time_based=1 00:18:16.328 runtime=1 00:18:16.328 ioengine=libaio 00:18:16.328 direct=1 00:18:16.328 bs=4096 00:18:16.328 iodepth=1 00:18:16.328 norandommap=0 00:18:16.328 numjobs=1 00:18:16.328 00:18:16.587 verify_dump=1 00:18:16.587 verify_backlog=512 00:18:16.587 verify_state_save=0 00:18:16.587 do_verify=1 00:18:16.587 verify=crc32c-intel 00:18:16.587 [job0] 00:18:16.587 filename=/dev/nvme0n1 00:18:16.587 [job1] 00:18:16.587 filename=/dev/nvme0n2 00:18:16.587 [job2] 00:18:16.587 filename=/dev/nvme0n3 00:18:16.587 [job3] 00:18:16.587 filename=/dev/nvme0n4 00:18:16.587 Could not set queue depth (nvme0n1) 00:18:16.587 Could not set queue depth (nvme0n2) 00:18:16.587 Could not set queue depth (nvme0n3) 00:18:16.587 Could not set queue depth (nvme0n4) 00:18:16.587 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:16.587 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:16.587 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:16.587 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:16.587 fio-3.35 00:18:16.587 Starting 4 threads 00:18:17.959 00:18:17.959 job0: (groupid=0, jobs=1): err= 0: pid=1106909: Sat Jul 13 15:29:48 2024 00:18:17.959 read: IOPS=168, BW=673KiB/s (689kB/s)(684KiB/1017msec) 00:18:17.959 slat (nsec): min=7436, max=32854, avg=15906.77, stdev=2729.42 00:18:17.959 clat (usec): min=305, max=42014, avg=4673.68, stdev=12539.78 00:18:17.959 lat (usec): min=322, max=42029, 
avg=4689.58, stdev=12540.02 00:18:17.959 clat percentiles (usec): 00:18:17.959 | 1.00th=[ 310], 5.00th=[ 359], 10.00th=[ 367], 20.00th=[ 371], 00:18:17.959 | 30.00th=[ 375], 40.00th=[ 379], 50.00th=[ 383], 60.00th=[ 383], 00:18:17.959 | 70.00th=[ 388], 80.00th=[ 392], 90.00th=[40633], 95.00th=[41157], 00:18:17.959 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:17.959 | 99.99th=[42206] 00:18:17.959 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:18:17.959 slat (nsec): min=7116, max=75949, avg=20430.38, stdev=10472.79 00:18:17.959 clat (usec): min=184, max=3997, avg=392.06, stdev=203.01 00:18:17.959 lat (usec): min=192, max=4019, avg=412.49, stdev=202.71 00:18:17.959 clat percentiles (usec): 00:18:17.959 | 1.00th=[ 196], 5.00th=[ 208], 10.00th=[ 223], 20.00th=[ 235], 00:18:17.959 | 30.00th=[ 269], 40.00th=[ 367], 50.00th=[ 404], 60.00th=[ 429], 00:18:17.959 | 70.00th=[ 465], 80.00th=[ 506], 90.00th=[ 553], 95.00th=[ 570], 00:18:17.959 | 99.00th=[ 627], 99.50th=[ 660], 99.90th=[ 3982], 99.95th=[ 3982], 00:18:17.959 | 99.99th=[ 3982] 00:18:17.959 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:18:17.959 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:17.959 lat (usec) : 250=19.77%, 500=60.61%, 750=16.69%, 1000=0.15% 00:18:17.959 lat (msec) : 4=0.15%, 50=2.64% 00:18:17.959 cpu : usr=1.38%, sys=1.28%, ctx=683, majf=0, minf=1 00:18:17.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:17.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.959 issued rwts: total=171,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:17.959 job1: (groupid=0, jobs=1): err= 0: pid=1106930: Sat Jul 13 15:29:48 2024 00:18:17.959 read: IOPS=21, BW=86.8KiB/s (88.9kB/s)(88.0KiB/1014msec) 00:18:17.959 slat (nsec): min=6499, max=33826, avg=18378.27, stdev=7610.72 00:18:17.959 clat (usec): min=26817, max=41059, avg=40319.47, stdev=3016.28 00:18:17.959 lat (usec): min=26833, max=41075, avg=40337.85, stdev=3016.99 00:18:17.959 clat percentiles (usec): 00:18:17.959 | 1.00th=[26870], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:18:17.959 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:17.959 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:17.959 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:17.959 | 99.99th=[41157] 00:18:17.959 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:18:17.959 slat (nsec): min=6129, max=59791, avg=10287.30, stdev=5540.30 00:18:17.959 clat (usec): min=190, max=445, avg=233.83, stdev=30.98 00:18:17.959 lat (usec): min=197, max=452, avg=244.12, stdev=31.49 00:18:17.959 clat percentiles (usec): 00:18:17.959 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 210], 00:18:17.959 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:18:17.959 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 269], 95.00th=[ 281], 00:18:17.959 | 99.00th=[ 375], 99.50th=[ 383], 99.90th=[ 445], 99.95th=[ 445], 00:18:17.959 | 99.99th=[ 445] 00:18:17.959 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:18:17.959 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:17.959 lat (usec) : 250=74.53%, 500=21.35% 
00:18:17.959 lat (msec) : 50=4.12% 00:18:17.959 cpu : usr=0.30%, sys=0.59%, ctx=535, majf=0, minf=1 00:18:17.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:17.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.959 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:17.959 job2: (groupid=0, jobs=1): err= 0: pid=1106964: Sat Jul 13 15:29:48 2024 00:18:17.959 read: IOPS=21, BW=84.9KiB/s (86.9kB/s)(88.0KiB/1037msec) 00:18:17.959 slat (nsec): min=7632, max=35574, avg=18836.68, stdev=7689.65 00:18:17.959 clat (usec): min=40925, max=41993, avg=41521.06, stdev=508.96 00:18:17.959 lat (usec): min=40959, max=42009, avg=41539.90, stdev=508.96 00:18:17.959 clat percentiles (usec): 00:18:17.959 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:17.959 | 30.00th=[41157], 40.00th=[41157], 50.00th=[42206], 60.00th=[42206], 00:18:17.959 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:17.959 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:17.959 | 99.99th=[42206] 00:18:17.959 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:18:17.959 slat (nsec): min=6696, max=34610, avg=9873.24, stdev=4430.84 00:18:17.959 clat (usec): min=200, max=351, avg=227.52, stdev=14.91 00:18:17.959 lat (usec): min=207, max=360, avg=237.40, stdev=15.92 00:18:17.959 clat percentiles (usec): 00:18:17.959 | 1.00th=[ 204], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 217], 00:18:17.959 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:18:17.959 | 70.00th=[ 235], 80.00th=[ 239], 90.00th=[ 245], 95.00th=[ 249], 00:18:17.959 | 99.00th=[ 269], 99.50th=[ 302], 99.90th=[ 351], 99.95th=[ 351], 00:18:17.959 | 99.99th=[ 351] 00:18:17.959 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:18:17.959 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:17.959 lat (usec) : 250=91.76%, 500=4.12% 00:18:17.959 lat (msec) : 50=4.12% 00:18:17.959 cpu : usr=0.19%, sys=0.48%, ctx=535, majf=0, minf=1 00:18:17.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:17.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.959 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:17.959 job3: (groupid=0, jobs=1): err= 0: pid=1106977: Sat Jul 13 15:29:48 2024 00:18:17.959 read: IOPS=20, BW=82.9KiB/s (84.9kB/s)(84.0KiB/1013msec) 00:18:17.959 slat (nsec): min=15065, max=33608, avg=17475.62, stdev=5385.32 00:18:17.959 clat (usec): min=461, max=42226, avg=39224.49, stdev=8890.72 00:18:17.959 lat (usec): min=480, max=42243, avg=39241.96, stdev=8890.36 00:18:17.959 clat percentiles (usec): 00:18:17.959 | 1.00th=[ 461], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:17.959 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:17.959 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:18:17.959 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:17.959 | 99.99th=[42206] 00:18:17.959 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 
00:18:17.959 slat (nsec): min=6138, max=53781, avg=16470.97, stdev=7591.34 00:18:17.959 clat (usec): min=191, max=660, avg=347.93, stdev=138.99 00:18:17.959 lat (usec): min=198, max=670, avg=364.40, stdev=139.47 00:18:17.959 clat percentiles (usec): 00:18:17.959 | 1.00th=[ 196], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:18:17.959 | 30.00th=[ 223], 40.00th=[ 233], 50.00th=[ 277], 60.00th=[ 412], 00:18:17.959 | 70.00th=[ 453], 80.00th=[ 502], 90.00th=[ 553], 95.00th=[ 578], 00:18:17.959 | 99.00th=[ 603], 99.50th=[ 611], 99.90th=[ 660], 99.95th=[ 660], 00:18:17.959 | 99.99th=[ 660] 00:18:17.959 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:18:17.959 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:17.959 lat (usec) : 250=46.34%, 500=30.02%, 750=19.89% 00:18:17.959 lat (msec) : 50=3.75% 00:18:17.959 cpu : usr=0.49%, sys=0.69%, ctx=533, majf=0, minf=2 00:18:17.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:17.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.959 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:17.959 00:18:17.959 Run status group 0 (all jobs): 00:18:17.959 READ: bw=910KiB/s (932kB/s), 82.9KiB/s-673KiB/s (84.9kB/s-689kB/s), io=944KiB (967kB), run=1013-1037msec 00:18:17.959 WRITE: bw=7900KiB/s (8089kB/s), 1975KiB/s-2022KiB/s (2022kB/s-2070kB/s), io=8192KiB (8389kB), run=1013-1037msec 00:18:17.959 00:18:17.959 Disk stats (read/write): 00:18:17.959 nvme0n1: ios=216/512, merge=0/0, ticks=624/197, in_queue=821, util=86.87% 00:18:17.959 nvme0n2: ios=56/512, merge=0/0, ticks=770/114, in_queue=884, util=89.33% 00:18:17.959 nvme0n3: ios=64/512, merge=0/0, ticks=1153/116, in_queue=1269, util=97.80% 00:18:17.959 nvme0n4: ios=15/512, merge=0/0, ticks=619/176, in_queue=795, util=89.53% 00:18:17.959 15:29:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:17.959 [global] 00:18:17.959 thread=1 00:18:17.959 invalidate=1 00:18:17.959 rw=write 00:18:17.959 time_based=1 00:18:17.959 runtime=1 00:18:17.959 ioengine=libaio 00:18:17.959 direct=1 00:18:17.959 bs=4096 00:18:17.959 iodepth=128 00:18:17.959 norandommap=0 00:18:17.959 numjobs=1 00:18:17.959 00:18:17.960 verify_dump=1 00:18:17.960 verify_backlog=512 00:18:17.960 verify_state_save=0 00:18:17.960 do_verify=1 00:18:17.960 verify=crc32c-intel 00:18:17.960 [job0] 00:18:17.960 filename=/dev/nvme0n1 00:18:17.960 [job1] 00:18:17.960 filename=/dev/nvme0n2 00:18:17.960 [job2] 00:18:17.960 filename=/dev/nvme0n3 00:18:17.960 [job3] 00:18:17.960 filename=/dev/nvme0n4 00:18:17.960 Could not set queue depth (nvme0n1) 00:18:17.960 Could not set queue depth (nvme0n2) 00:18:17.960 Could not set queue depth (nvme0n3) 00:18:17.960 Could not set queue depth (nvme0n4) 00:18:18.217 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:18.217 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:18.217 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:18.217 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:18:18.217 fio-3.35 00:18:18.217 Starting 4 threads 00:18:19.592 00:18:19.592 job0: (groupid=0, jobs=1): err= 0: pid=1107222: Sat Jul 13 15:29:49 2024 00:18:19.592 read: IOPS=2918, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1007msec) 00:18:19.592 slat (usec): min=3, max=19711, avg=169.34, stdev=1222.39 00:18:19.592 clat (usec): min=4921, max=47295, avg=21882.94, stdev=4938.66 00:18:19.592 lat (usec): min=10359, max=47312, avg=22052.29, stdev=5041.52 00:18:19.592 clat percentiles (usec): 00:18:19.593 | 1.00th=[10421], 5.00th=[12780], 10.00th=[15270], 20.00th=[18220], 00:18:19.593 | 30.00th=[19792], 40.00th=[20579], 50.00th=[22152], 60.00th=[22676], 00:18:19.593 | 70.00th=[24249], 80.00th=[26084], 90.00th=[28181], 95.00th=[28705], 00:18:19.593 | 99.00th=[34866], 99.50th=[39060], 99.90th=[39060], 99.95th=[45351], 00:18:19.593 | 99.99th=[47449] 00:18:19.593 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:18:19.593 slat (usec): min=4, max=12546, avg=155.22, stdev=881.02 00:18:19.593 clat (usec): min=4156, max=58358, avg=20394.39, stdev=8639.74 00:18:19.593 lat (usec): min=4167, max=58376, avg=20549.61, stdev=8694.93 00:18:19.593 clat percentiles (usec): 00:18:19.593 | 1.00th=[ 5342], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[12780], 00:18:19.593 | 30.00th=[15795], 40.00th=[19530], 50.00th=[21627], 60.00th=[22938], 00:18:19.593 | 70.00th=[23200], 80.00th=[23462], 90.00th=[27657], 95.00th=[40109], 00:18:19.593 | 99.00th=[51643], 99.50th=[52691], 99.90th=[58459], 99.95th=[58459], 00:18:19.593 | 99.99th=[58459] 00:18:19.593 bw ( KiB/s): min=12288, max=12288, per=19.90%, avg=12288.00, stdev= 0.00, samples=2 00:18:19.593 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:18:19.593 lat (msec) : 10=5.36%, 20=32.94%, 50=60.92%, 100=0.78% 00:18:19.593 cpu : usr=3.48%, sys=5.37%, ctx=269, majf=0, minf=1 00:18:19.593 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:19.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:19.593 issued rwts: total=2939,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.593 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:19.593 job1: (groupid=0, jobs=1): err= 0: pid=1107223: Sat Jul 13 15:29:49 2024 00:18:19.593 read: IOPS=4211, BW=16.5MiB/s (17.2MB/s)(16.6MiB/1007msec) 00:18:19.593 slat (usec): min=3, max=19361, avg=140.54, stdev=1076.52 00:18:19.593 clat (msec): min=3, max=181, avg=15.21, stdev=16.80 00:18:19.593 lat (msec): min=4, max=181, avg=15.35, stdev=17.00 00:18:19.593 clat percentiles (msec): 00:18:19.593 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:18:19.593 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:18:19.593 | 70.00th=[ 14], 80.00th=[ 17], 90.00th=[ 22], 95.00th=[ 27], 00:18:19.593 | 99.00th=[ 116], 99.50th=[ 142], 99.90th=[ 182], 99.95th=[ 182], 00:18:19.593 | 99.99th=[ 182] 00:18:19.593 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:18:19.593 slat (usec): min=4, max=14259, avg=79.64, stdev=515.49 00:18:19.593 clat (usec): min=1603, max=181737, avg=13704.85, stdev=18378.05 00:18:19.593 lat (usec): min=1630, max=181750, avg=13784.49, stdev=18413.29 00:18:19.593 clat percentiles (msec): 00:18:19.593 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:18:19.593 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:18:19.593 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 15], 95.00th=[ 24], 
00:18:19.593 | 99.00th=[ 133], 99.50th=[ 163], 99.90th=[ 176], 99.95th=[ 176], 00:18:19.593 | 99.99th=[ 182] 00:18:19.593 bw ( KiB/s): min=12288, max=24576, per=29.85%, avg=18432.00, stdev=8688.93, samples=2 00:18:19.593 iops : min= 3072, max= 6144, avg=4608.00, stdev=2172.23, samples=2 00:18:19.593 lat (msec) : 2=0.02%, 4=0.46%, 10=25.26%, 20=63.61%, 50=8.40% 00:18:19.593 lat (msec) : 100=0.81%, 250=1.44% 00:18:19.593 cpu : usr=5.67%, sys=7.16%, ctx=489, majf=0, minf=1 00:18:19.593 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:19.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:19.593 issued rwts: total=4241,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.593 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:19.593 job2: (groupid=0, jobs=1): err= 0: pid=1107224: Sat Jul 13 15:29:49 2024 00:18:19.593 read: IOPS=3534, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1014msec) 00:18:19.593 slat (usec): min=2, max=14304, avg=121.21, stdev=878.69 00:18:19.593 clat (usec): min=5327, max=37259, avg=15586.99, stdev=4729.24 00:18:19.593 lat (usec): min=5333, max=37274, avg=15708.20, stdev=4792.53 00:18:19.593 clat percentiles (usec): 00:18:19.593 | 1.00th=[ 6718], 5.00th=[10683], 10.00th=[11207], 20.00th=[12256], 00:18:19.593 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13304], 60.00th=[15533], 00:18:19.593 | 70.00th=[16909], 80.00th=[20579], 90.00th=[22938], 95.00th=[24511], 00:18:19.593 | 99.00th=[29492], 99.50th=[31065], 99.90th=[31327], 99.95th=[35914], 00:18:19.593 | 99.99th=[37487] 00:18:19.593 write: IOPS=3824, BW=14.9MiB/s (15.7MB/s)(15.1MiB/1014msec); 0 zone resets 00:18:19.593 slat (usec): min=3, max=16659, avg=136.38, stdev=946.21 00:18:19.593 clat (usec): min=1495, max=166151, avg=18745.81, stdev=24897.65 00:18:19.593 lat (usec): min=1509, max=166156, avg=18882.19, stdev=25052.67 00:18:19.593 clat percentiles (msec): 00:18:19.593 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 10], 00:18:19.593 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 14], 00:18:19.593 | 70.00th=[ 14], 80.00th=[ 17], 90.00th=[ 23], 95.00th=[ 68], 00:18:19.593 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 167], 99.95th=[ 167], 00:18:19.593 | 99.99th=[ 167] 00:18:19.593 bw ( KiB/s): min= 9520, max=20480, per=24.29%, avg=15000.00, stdev=7749.89, samples=2 00:18:19.593 iops : min= 2380, max= 5120, avg=3750.00, stdev=1937.47, samples=2 00:18:19.593 lat (msec) : 2=0.04%, 4=0.32%, 10=11.99%, 20=71.64%, 50=12.91% 00:18:19.593 lat (msec) : 100=1.39%, 250=1.70% 00:18:19.593 cpu : usr=4.74%, sys=5.53%, ctx=390, majf=0, minf=1 00:18:19.593 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:19.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:19.593 issued rwts: total=3584,3878,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.593 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:19.593 job3: (groupid=0, jobs=1): err= 0: pid=1107225: Sat Jul 13 15:29:49 2024 00:18:19.593 read: IOPS=4003, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1002msec) 00:18:19.593 slat (usec): min=2, max=13139, avg=123.78, stdev=804.20 00:18:19.593 clat (usec): min=803, max=49097, avg=15037.83, stdev=6462.93 00:18:19.593 lat (usec): min=3174, max=49100, avg=15161.61, stdev=6537.04 00:18:19.593 clat percentiles (usec): 00:18:19.593 | 1.00th=[ 5800], 
5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[10028], 00:18:19.593 | 30.00th=[10683], 40.00th=[11338], 50.00th=[11994], 60.00th=[14091], 00:18:19.593 | 70.00th=[17957], 80.00th=[21627], 90.00th=[23725], 95.00th=[26870], 00:18:19.593 | 99.00th=[34866], 99.50th=[37487], 99.90th=[44303], 99.95th=[49021], 00:18:19.593 | 99.99th=[49021] 00:18:19.593 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:18:19.593 slat (usec): min=3, max=11870, avg=115.27, stdev=664.04 00:18:19.593 clat (usec): min=4931, max=40774, avg=16129.07, stdev=6911.82 00:18:19.593 lat (usec): min=4940, max=40799, avg=16244.35, stdev=6965.58 00:18:19.593 clat percentiles (usec): 00:18:19.593 | 1.00th=[ 6456], 5.00th=[ 8717], 10.00th=[ 9765], 20.00th=[10290], 00:18:19.593 | 30.00th=[10683], 40.00th=[11076], 50.00th=[13435], 60.00th=[17171], 00:18:19.593 | 70.00th=[22152], 80.00th=[23200], 90.00th=[23462], 95.00th=[26346], 00:18:19.593 | 99.00th=[36439], 99.50th=[38011], 99.90th=[40633], 99.95th=[40633], 00:18:19.593 | 99.99th=[40633] 00:18:19.593 bw ( KiB/s): min=12312, max=20480, per=26.55%, avg=16396.00, stdev=5775.65, samples=2 00:18:19.593 iops : min= 3078, max= 5120, avg=4099.00, stdev=1443.91, samples=2 00:18:19.593 lat (usec) : 1000=0.01% 00:18:19.593 lat (msec) : 4=0.36%, 10=16.71%, 20=50.74%, 50=32.18% 00:18:19.593 cpu : usr=4.60%, sys=6.19%, ctx=393, majf=0, minf=1 00:18:19.593 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:19.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:19.593 issued rwts: total=4012,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.593 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:19.593 00:18:19.593 Run status group 0 (all jobs): 00:18:19.593 READ: bw=56.9MiB/s (59.7MB/s), 11.4MiB/s-16.5MiB/s (12.0MB/s-17.2MB/s), io=57.7MiB (60.5MB), run=1002-1014msec 00:18:19.593 WRITE: bw=60.3MiB/s (63.2MB/s), 11.9MiB/s-17.9MiB/s (12.5MB/s-18.7MB/s), io=61.1MiB (64.1MB), run=1002-1014msec 00:18:19.593 00:18:19.593 Disk stats (read/write): 00:18:19.593 nvme0n1: ios=2299/2560, merge=0/0, ticks=34194/31440, in_queue=65634, util=96.09% 00:18:19.593 nvme0n2: ios=3757/4096, merge=0/0, ticks=51050/52377, in_queue=103427, util=98.07% 00:18:19.593 nvme0n3: ios=3630/3631, merge=0/0, ticks=51581/45757, in_queue=97338, util=97.81% 00:18:19.593 nvme0n4: ios=3098/3244, merge=0/0, ticks=26424/26366, in_queue=52790, util=98.00% 00:18:19.593 15:29:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:19.593 [global] 00:18:19.593 thread=1 00:18:19.593 invalidate=1 00:18:19.593 rw=randwrite 00:18:19.593 time_based=1 00:18:19.593 runtime=1 00:18:19.593 ioengine=libaio 00:18:19.593 direct=1 00:18:19.593 bs=4096 00:18:19.593 iodepth=128 00:18:19.593 norandommap=0 00:18:19.593 numjobs=1 00:18:19.593 00:18:19.593 verify_dump=1 00:18:19.593 verify_backlog=512 00:18:19.593 verify_state_save=0 00:18:19.593 do_verify=1 00:18:19.593 verify=crc32c-intel 00:18:19.593 [job0] 00:18:19.593 filename=/dev/nvme0n1 00:18:19.593 [job1] 00:18:19.593 filename=/dev/nvme0n2 00:18:19.593 [job2] 00:18:19.593 filename=/dev/nvme0n3 00:18:19.593 [job3] 00:18:19.593 filename=/dev/nvme0n4 00:18:19.593 Could not set queue depth (nvme0n1) 00:18:19.593 Could not set queue depth (nvme0n2) 00:18:19.593 Could not set queue depth (nvme0n3) 00:18:19.593 Could 
not set queue depth (nvme0n4) 00:18:19.593 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:19.593 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:19.593 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:19.593 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:19.593 fio-3.35 00:18:19.593 Starting 4 threads 00:18:20.970 00:18:20.970 job0: (groupid=0, jobs=1): err= 0: pid=1107455: Sat Jul 13 15:29:51 2024 00:18:20.970 read: IOPS=5321, BW=20.8MiB/s (21.8MB/s)(21.0MiB/1008msec) 00:18:20.970 slat (usec): min=2, max=16091, avg=96.74, stdev=692.63 00:18:20.970 clat (usec): min=3712, max=27362, avg=12185.11, stdev=3305.89 00:18:20.970 lat (usec): min=4511, max=27375, avg=12281.84, stdev=3340.93 00:18:20.970 clat percentiles (usec): 00:18:20.970 | 1.00th=[ 5407], 5.00th=[ 7963], 10.00th=[ 9241], 20.00th=[10028], 00:18:20.970 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11076], 60.00th=[11469], 00:18:20.970 | 70.00th=[12911], 80.00th=[15008], 90.00th=[17695], 95.00th=[18744], 00:18:20.970 | 99.00th=[20579], 99.50th=[21890], 99.90th=[23462], 99.95th=[23462], 00:18:20.970 | 99.99th=[27395] 00:18:20.970 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:18:20.970 slat (usec): min=3, max=14814, avg=77.70, stdev=430.80 00:18:20.970 clat (usec): min=2749, max=29020, avg=11062.80, stdev=3048.94 00:18:20.970 lat (usec): min=2766, max=29040, avg=11140.50, stdev=3074.84 00:18:20.970 clat percentiles (usec): 00:18:20.970 | 1.00th=[ 3752], 5.00th=[ 5669], 10.00th=[ 6718], 20.00th=[ 8979], 00:18:20.970 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11600], 00:18:20.970 | 70.00th=[11863], 80.00th=[12387], 90.00th=[13042], 95.00th=[14746], 00:18:20.971 | 99.00th=[22676], 99.50th=[22938], 99.90th=[23200], 99.95th=[23200], 00:18:20.971 | 99.99th=[28967] 00:18:20.971 bw ( KiB/s): min=20848, max=24208, per=33.15%, avg=22528.00, stdev=2375.88, samples=2 00:18:20.971 iops : min= 5212, max= 6052, avg=5632.00, stdev=593.97, samples=2 00:18:20.971 lat (msec) : 4=0.82%, 10=20.53%, 20=76.33%, 50=2.33% 00:18:20.971 cpu : usr=6.36%, sys=8.34%, ctx=684, majf=0, minf=1 00:18:20.971 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:18:20.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:20.971 issued rwts: total=5364,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.971 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:20.971 job1: (groupid=0, jobs=1): err= 0: pid=1107456: Sat Jul 13 15:29:51 2024 00:18:20.971 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:18:20.971 slat (usec): min=3, max=17080, avg=132.71, stdev=877.30 00:18:20.971 clat (usec): min=6734, max=82476, avg=17288.86, stdev=8361.35 00:18:20.971 lat (usec): min=7960, max=88103, avg=17421.57, stdev=8425.20 00:18:20.971 clat percentiles (usec): 00:18:20.971 | 1.00th=[ 8979], 5.00th=[10421], 10.00th=[11076], 20.00th=[12125], 00:18:20.971 | 30.00th=[13042], 40.00th=[14091], 50.00th=[15139], 60.00th=[15664], 00:18:20.971 | 70.00th=[16581], 80.00th=[19530], 90.00th=[25822], 95.00th=[35914], 00:18:20.971 | 99.00th=[42730], 99.50th=[74974], 99.90th=[82314], 99.95th=[82314], 00:18:20.971 | 
99.99th=[82314] 00:18:20.971 write: IOPS=3093, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1005msec); 0 zone resets 00:18:20.971 slat (usec): min=4, max=26544, avg=182.44, stdev=1173.66 00:18:20.971 clat (msec): min=2, max=113, avg=23.81, stdev=18.40 00:18:20.971 lat (msec): min=5, max=113, avg=23.99, stdev=18.52 00:18:20.971 clat percentiles (msec): 00:18:20.971 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 12], 20.00th=[ 12], 00:18:20.971 | 30.00th=[ 13], 40.00th=[ 15], 50.00th=[ 17], 60.00th=[ 22], 00:18:20.971 | 70.00th=[ 24], 80.00th=[ 36], 90.00th=[ 46], 95.00th=[ 50], 00:18:20.971 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 112], 99.95th=[ 113], 00:18:20.971 | 99.99th=[ 113] 00:18:20.971 bw ( KiB/s): min= 8192, max=16384, per=18.08%, avg=12288.00, stdev=5792.62, samples=2 00:18:20.971 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:18:20.971 lat (msec) : 4=0.02%, 10=4.50%, 20=64.84%, 50=27.71%, 100=2.17% 00:18:20.971 lat (msec) : 250=0.76% 00:18:20.971 cpu : usr=3.59%, sys=5.28%, ctx=324, majf=0, minf=1 00:18:20.971 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:20.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:20.971 issued rwts: total=3072,3109,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.971 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:20.971 job2: (groupid=0, jobs=1): err= 0: pid=1107457: Sat Jul 13 15:29:51 2024 00:18:20.971 read: IOPS=4917, BW=19.2MiB/s (20.1MB/s)(19.3MiB/1003msec) 00:18:20.971 slat (usec): min=3, max=5933, avg=97.57, stdev=508.30 00:18:20.971 clat (usec): min=872, max=23182, avg=12638.11, stdev=1769.58 00:18:20.971 lat (usec): min=3670, max=23187, avg=12735.69, stdev=1798.90 00:18:20.971 clat percentiles (usec): 00:18:20.971 | 1.00th=[ 7046], 5.00th=[10028], 10.00th=[10683], 20.00th=[11863], 00:18:20.971 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12518], 60.00th=[12780], 00:18:20.971 | 70.00th=[13173], 80.00th=[13698], 90.00th=[14877], 95.00th=[15795], 00:18:20.971 | 99.00th=[17171], 99.50th=[17695], 99.90th=[17957], 99.95th=[19268], 00:18:20.971 | 99.99th=[23200] 00:18:20.971 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:18:20.971 slat (usec): min=4, max=6499, avg=93.29, stdev=521.93 00:18:20.971 clat (usec): min=6584, max=20041, avg=12612.17, stdev=1473.69 00:18:20.971 lat (usec): min=6813, max=20076, avg=12705.46, stdev=1482.81 00:18:20.971 clat percentiles (usec): 00:18:20.971 | 1.00th=[ 7635], 5.00th=[ 9765], 10.00th=[11469], 20.00th=[11863], 00:18:20.971 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12911], 00:18:20.971 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13698], 95.00th=[14615], 00:18:20.971 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18220], 99.95th=[19006], 00:18:20.971 | 99.99th=[20055] 00:18:20.971 bw ( KiB/s): min=20480, max=20480, per=30.14%, avg=20480.00, stdev= 0.00, samples=2 00:18:20.971 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:18:20.971 lat (usec) : 1000=0.01% 00:18:20.971 lat (msec) : 4=0.30%, 10=5.04%, 20=94.63%, 50=0.02% 00:18:20.971 cpu : usr=4.39%, sys=9.58%, ctx=504, majf=0, minf=1 00:18:20.971 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:20.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:20.971 issued rwts: 
total=4932,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.971 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:20.971 job3: (groupid=0, jobs=1): err= 0: pid=1107458: Sat Jul 13 15:29:51 2024 00:18:20.971 read: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1013msec) 00:18:20.971 slat (usec): min=2, max=21145, avg=162.72, stdev=1172.00 00:18:20.971 clat (usec): min=6268, max=56730, avg=19907.02, stdev=8434.02 00:18:20.971 lat (usec): min=6275, max=56745, avg=20069.74, stdev=8549.00 00:18:20.971 clat percentiles (usec): 00:18:20.971 | 1.00th=[ 7635], 5.00th=[12387], 10.00th=[13435], 20.00th=[14484], 00:18:20.971 | 30.00th=[15008], 40.00th=[16057], 50.00th=[17433], 60.00th=[18220], 00:18:20.971 | 70.00th=[20579], 80.00th=[23462], 90.00th=[33817], 95.00th=[37487], 00:18:20.971 | 99.00th=[48497], 99.50th=[50594], 99.90th=[54789], 99.95th=[54789], 00:18:20.971 | 99.99th=[56886] 00:18:20.971 write: IOPS=3307, BW=12.9MiB/s (13.5MB/s)(13.1MiB/1013msec); 0 zone resets 00:18:20.971 slat (usec): min=3, max=26464, avg=144.04, stdev=1014.19 00:18:20.971 clat (usec): min=3280, max=62619, avg=19800.11, stdev=9397.40 00:18:20.971 lat (usec): min=3286, max=62665, avg=19944.15, stdev=9456.85 00:18:20.971 clat percentiles (usec): 00:18:20.971 | 1.00th=[ 7373], 5.00th=[ 8848], 10.00th=[11731], 20.00th=[12911], 00:18:20.971 | 30.00th=[14222], 40.00th=[15795], 50.00th=[17695], 60.00th=[18220], 00:18:20.971 | 70.00th=[20841], 80.00th=[23987], 90.00th=[34866], 95.00th=[39060], 00:18:20.971 | 99.00th=[54264], 99.50th=[54264], 99.90th=[54264], 99.95th=[54789], 00:18:20.971 | 99.99th=[62653] 00:18:20.971 bw ( KiB/s): min=12312, max=13488, per=18.98%, avg=12900.00, stdev=831.56, samples=2 00:18:20.971 iops : min= 3078, max= 3372, avg=3225.00, stdev=207.89, samples=2 00:18:20.971 lat (msec) : 4=0.09%, 10=3.64%, 20=65.09%, 50=29.51%, 100=1.67% 00:18:20.971 cpu : usr=2.77%, sys=3.46%, ctx=265, majf=0, minf=1 00:18:20.971 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:18:20.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:20.971 issued rwts: total=3072,3350,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.971 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:20.971 00:18:20.971 Run status group 0 (all jobs): 00:18:20.971 READ: bw=63.4MiB/s (66.5MB/s), 11.8MiB/s-20.8MiB/s (12.4MB/s-21.8MB/s), io=64.2MiB (67.3MB), run=1003-1013msec 00:18:20.971 WRITE: bw=66.4MiB/s (69.6MB/s), 12.1MiB/s-21.8MiB/s (12.7MB/s-22.9MB/s), io=67.2MiB (70.5MB), run=1003-1013msec 00:18:20.971 00:18:20.971 Disk stats (read/write): 00:18:20.971 nvme0n1: ios=4576/4608, merge=0/0, ticks=53775/50674, in_queue=104449, util=99.60% 00:18:20.971 nvme0n2: ios=2604/2794, merge=0/0, ticks=21169/30542, in_queue=51711, util=96.24% 00:18:20.971 nvme0n3: ios=4149/4337, merge=0/0, ticks=26635/25180, in_queue=51815, util=99.69% 00:18:20.971 nvme0n4: ios=2560/2850, merge=0/0, ticks=30824/33761, in_queue=64585, util=89.59% 00:18:20.971 15:29:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:20.971 15:29:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1107593 00:18:20.971 15:29:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:20.971 15:29:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:20.971 [global] 00:18:20.971 thread=1 00:18:20.971 
invalidate=1 00:18:20.971 rw=read 00:18:20.971 time_based=1 00:18:20.971 runtime=10 00:18:20.971 ioengine=libaio 00:18:20.971 direct=1 00:18:20.971 bs=4096 00:18:20.971 iodepth=1 00:18:20.971 norandommap=1 00:18:20.971 numjobs=1 00:18:20.971 00:18:20.971 [job0] 00:18:20.971 filename=/dev/nvme0n1 00:18:20.971 [job1] 00:18:20.971 filename=/dev/nvme0n2 00:18:20.971 [job2] 00:18:20.971 filename=/dev/nvme0n3 00:18:20.971 [job3] 00:18:20.971 filename=/dev/nvme0n4 00:18:20.971 Could not set queue depth (nvme0n1) 00:18:20.971 Could not set queue depth (nvme0n2) 00:18:20.971 Could not set queue depth (nvme0n3) 00:18:20.971 Could not set queue depth (nvme0n4) 00:18:20.971 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:20.971 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:20.971 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:20.971 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:20.971 fio-3.35 00:18:20.971 Starting 4 threads 00:18:24.250 15:29:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:24.250 15:29:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:24.250 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=6283264, buflen=4096 00:18:24.250 fio: pid=1107686, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:24.250 15:29:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:24.250 15:29:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:24.250 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=6455296, buflen=4096 00:18:24.250 fio: pid=1107685, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:24.508 fio: io_u error on file /dev/nvme0n1: Input/output error: read offset=4427776, buflen=4096 00:18:24.508 fio: pid=1107683, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:18:24.508 15:29:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:24.508 15:29:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:24.766 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=16621568, buflen=4096 00:18:24.766 fio: pid=1107684, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:24.766 15:29:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:24.766 15:29:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:25.025 00:18:25.025 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1107683: Sat Jul 13 15:29:55 2024 00:18:25.025 read: IOPS=316, BW=1264KiB/s (1294kB/s)(4324KiB/3422msec) 00:18:25.025 slat (usec): min=5, max=7916, avg=30.14, stdev=324.23 00:18:25.025 clat (usec): 
min=277, max=42226, avg=3131.07, stdev=10336.04 00:18:25.025 lat (usec): min=284, max=49068, avg=3154.59, stdev=10367.18 00:18:25.025 clat percentiles (usec): 00:18:25.025 | 1.00th=[ 289], 5.00th=[ 302], 10.00th=[ 314], 20.00th=[ 326], 00:18:25.025 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 343], 00:18:25.025 | 70.00th=[ 351], 80.00th=[ 388], 90.00th=[ 482], 95.00th=[41157], 00:18:25.025 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:25.025 | 99.99th=[42206] 00:18:25.025 bw ( KiB/s): min= 96, max= 5560, per=16.01%, avg=1428.00, stdev=2262.77, samples=6 00:18:25.025 iops : min= 24, max= 1390, avg=357.00, stdev=565.69, samples=6 00:18:25.025 lat (usec) : 500=90.48%, 750=2.50%, 1000=0.09% 00:18:25.025 lat (msec) : 2=0.09%, 50=6.75% 00:18:25.025 cpu : usr=0.29%, sys=1.02%, ctx=1083, majf=0, minf=1 00:18:25.025 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:25.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.025 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.026 issued rwts: total=1082,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.026 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:25.026 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1107684: Sat Jul 13 15:29:55 2024 00:18:25.026 read: IOPS=1097, BW=4388KiB/s (4494kB/s)(15.9MiB/3699msec) 00:18:25.026 slat (usec): min=4, max=39872, avg=45.94, stdev=800.57 00:18:25.026 clat (usec): min=322, max=42052, avg=855.24, stdev=4186.84 00:18:25.026 lat (usec): min=335, max=42086, avg=901.18, stdev=4260.98 00:18:25.026 clat percentiles (usec): 00:18:25.026 | 1.00th=[ 343], 5.00th=[ 355], 10.00th=[ 363], 20.00th=[ 379], 00:18:25.026 | 30.00th=[ 396], 40.00th=[ 404], 50.00th=[ 420], 60.00th=[ 429], 00:18:25.026 | 70.00th=[ 437], 80.00th=[ 469], 90.00th=[ 498], 95.00th=[ 506], 00:18:25.026 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:18:25.026 | 99.99th=[42206] 00:18:25.026 bw ( KiB/s): min= 96, max= 9184, per=47.42%, avg=4230.43, stdev=4203.60, samples=7 00:18:25.026 iops : min= 24, max= 2296, avg=1057.57, stdev=1050.87, samples=7 00:18:25.026 lat (usec) : 500=92.24%, 750=6.60%, 1000=0.02% 00:18:25.026 lat (msec) : 4=0.02%, 10=0.02%, 50=1.06% 00:18:25.026 cpu : usr=1.11%, sys=2.14%, ctx=4066, majf=0, minf=1 00:18:25.026 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:25.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.026 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.026 issued rwts: total=4059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.026 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:25.026 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1107685: Sat Jul 13 15:29:55 2024 00:18:25.026 read: IOPS=493, BW=1973KiB/s (2020kB/s)(6304KiB/3195msec) 00:18:25.026 slat (nsec): min=5850, max=52001, avg=14829.94, stdev=5223.84 00:18:25.026 clat (usec): min=348, max=45002, avg=1993.96, stdev=7923.78 00:18:25.026 lat (usec): min=357, max=45018, avg=2008.79, stdev=7925.21 00:18:25.026 clat percentiles (usec): 00:18:25.026 | 1.00th=[ 359], 5.00th=[ 367], 10.00th=[ 371], 20.00th=[ 383], 00:18:25.026 | 30.00th=[ 396], 40.00th=[ 400], 50.00th=[ 404], 60.00th=[ 408], 00:18:25.026 | 70.00th=[ 412], 80.00th=[ 420], 90.00th=[ 437], 95.00th=[ 510], 00:18:25.026 | 
99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[44827], 00:18:25.026 | 99.99th=[44827] 00:18:25.026 bw ( KiB/s): min= 96, max= 7984, per=23.47%, avg=2094.67, stdev=3319.50, samples=6 00:18:25.026 iops : min= 24, max= 1996, avg=523.67, stdev=829.87, samples=6 00:18:25.026 lat (usec) : 500=93.91%, 750=2.16% 00:18:25.026 lat (msec) : 50=3.87% 00:18:25.026 cpu : usr=0.50%, sys=1.10%, ctx=1577, majf=0, minf=1 00:18:25.026 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:25.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.026 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.026 issued rwts: total=1577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.026 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:25.026 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1107686: Sat Jul 13 15:29:55 2024 00:18:25.026 read: IOPS=526, BW=2104KiB/s (2154kB/s)(6136KiB/2917msec) 00:18:25.026 slat (nsec): min=4533, max=77834, avg=20514.29, stdev=10959.40 00:18:25.026 clat (usec): min=278, max=42165, avg=1861.03, stdev=7778.21 00:18:25.026 lat (usec): min=284, max=42243, avg=1881.54, stdev=7779.35 00:18:25.026 clat percentiles (usec): 00:18:25.026 | 1.00th=[ 285], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 302], 00:18:25.026 | 30.00th=[ 306], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 343], 00:18:25.026 | 70.00th=[ 359], 80.00th=[ 367], 90.00th=[ 383], 95.00th=[ 424], 00:18:25.026 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:25.026 | 99.99th=[42206] 00:18:25.026 bw ( KiB/s): min= 96, max= 6088, per=27.33%, avg=2438.40, stdev=3208.90, samples=5 00:18:25.026 iops : min= 24, max= 1522, avg=609.60, stdev=802.23, samples=5 00:18:25.026 lat (usec) : 500=95.70%, 750=0.52% 00:18:25.026 lat (msec) : 50=3.71% 00:18:25.026 cpu : usr=0.34%, sys=1.34%, ctx=1537, majf=0, minf=1 00:18:25.026 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:25.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.026 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.026 issued rwts: total=1535,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.026 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:25.026 00:18:25.026 Run status group 0 (all jobs): 00:18:25.026 READ: bw=8920KiB/s (9134kB/s), 1264KiB/s-4388KiB/s (1294kB/s-4494kB/s), io=32.2MiB (33.8MB), run=2917-3699msec 00:18:25.026 00:18:25.026 Disk stats (read/write): 00:18:25.026 nvme0n1: ios=1079/0, merge=0/0, ticks=3296/0, in_queue=3296, util=95.42% 00:18:25.026 nvme0n2: ios=3900/0, merge=0/0, ticks=4289/0, in_queue=4289, util=96.75% 00:18:25.026 nvme0n3: ios=1574/0, merge=0/0, ticks=3042/0, in_queue=3042, util=96.75% 00:18:25.026 nvme0n4: ios=1580/0, merge=0/0, ticks=2943/0, in_queue=2943, util=99.56% 00:18:25.284 15:29:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:25.284 15:29:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:25.284 15:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:25.284 15:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc4 00:18:25.542 15:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:25.542 15:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:26.109 15:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:26.109 15:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:26.109 15:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:26.109 15:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1107593 00:18:26.109 15:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:26.109 15:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:26.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:26.398 15:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:26.398 15:29:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:18:26.398 15:29:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:26.398 15:29:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:26.398 15:29:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:26.398 15:29:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:26.398 15:29:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:18:26.398 15:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:26.398 15:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:26.398 nvmf hotplug test: fio failed as expected 00:18:26.399 15:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:26.658 rmmod nvme_tcp 00:18:26.658 rmmod nvme_fabrics 00:18:26.658 rmmod nvme_keyring 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:26.658 15:29:57 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1105667 ']' 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1105667 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1105667 ']' 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1105667 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1105667 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1105667' 00:18:26.658 killing process with pid 1105667 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1105667 00:18:26.658 15:29:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1105667 00:18:26.917 15:29:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:26.917 15:29:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:26.917 15:29:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:26.917 15:29:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:26.917 15:29:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:26.917 15:29:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.917 15:29:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.917 15:29:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.450 15:29:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:29.450 00:18:29.450 real 0m23.373s 00:18:29.450 user 1m23.174s 00:18:29.450 sys 0m5.731s 00:18:29.450 15:29:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:29.450 15:29:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.450 ************************************ 00:18:29.450 END TEST nvmf_fio_target 00:18:29.450 ************************************ 00:18:29.450 15:29:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:29.450 15:29:59 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:29.450 15:29:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:29.450 15:29:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:29.450 15:29:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:29.450 ************************************ 00:18:29.450 START TEST nvmf_bdevio 00:18:29.450 ************************************ 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:29.450 * Looking for test storage... 00:18:29.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:29.450 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.451 15:29:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:18:29.451 15:29:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.451 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:29.451 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:29.451 15:29:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:29.451 15:29:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:31.347 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:31.347 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:31.347 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:31.347 
Found net devices under 0000:0a:00.1: cvl_0_1 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:31.347 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:31.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:18:31.347 00:18:31.348 --- 10.0.0.2 ping statistics --- 00:18:31.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.348 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:18:31.348 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:31.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:31.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:18:31.348 00:18:31.348 --- 10.0.0.1 ping statistics --- 00:18:31.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.348 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:18:31.348 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.348 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:31.348 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:31.348 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.348 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:31.348 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:31.348 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.348 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:31.348 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:31.348 15:30:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:31.348 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:31.348 15:30:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:31.348 15:30:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:31.348 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1110404 00:18:31.348 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:31.348 15:30:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1110404 00:18:31.348 15:30:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1110404 ']' 00:18:31.348 15:30:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.348 15:30:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.348 15:30:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.348 15:30:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.348 15:30:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:31.348 [2024-07-13 15:30:01.811890] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:18:31.348 [2024-07-13 15:30:01.811962] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.348 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.348 [2024-07-13 15:30:01.848639] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:31.348 [2024-07-13 15:30:01.881520] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:31.348 [2024-07-13 15:30:01.980855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:31.348 [2024-07-13 15:30:01.980936] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.348 [2024-07-13 15:30:01.980954] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.348 [2024-07-13 15:30:01.980967] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.348 [2024-07-13 15:30:01.980979] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:31.348 [2024-07-13 15:30:01.981037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:31.348 [2024-07-13 15:30:01.981094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:31.348 [2024-07-13 15:30:01.981149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:31.348 [2024-07-13 15:30:01.981151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:31.348 15:30:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:31.605 15:30:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:18:31.605 15:30:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:31.605 15:30:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:31.605 15:30:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:31.605 15:30:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.605 15:30:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:31.605 15:30:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.605 15:30:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:31.605 [2024-07-13 15:30:02.137617] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.605 15:30:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:31.606 Malloc0 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:31.606 15:30:02 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:31.606 [2024-07-13 15:30:02.190511] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:31.606 { 00:18:31.606 "params": { 00:18:31.606 "name": "Nvme$subsystem", 00:18:31.606 "trtype": "$TEST_TRANSPORT", 00:18:31.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:31.606 "adrfam": "ipv4", 00:18:31.606 "trsvcid": "$NVMF_PORT", 00:18:31.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:31.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:31.606 "hdgst": ${hdgst:-false}, 00:18:31.606 "ddgst": ${ddgst:-false} 00:18:31.606 }, 00:18:31.606 "method": "bdev_nvme_attach_controller" 00:18:31.606 } 00:18:31.606 EOF 00:18:31.606 )") 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:31.606 15:30:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:31.606 "params": { 00:18:31.606 "name": "Nvme1", 00:18:31.606 "trtype": "tcp", 00:18:31.606 "traddr": "10.0.0.2", 00:18:31.606 "adrfam": "ipv4", 00:18:31.606 "trsvcid": "4420", 00:18:31.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:31.606 "hdgst": false, 00:18:31.606 "ddgst": false 00:18:31.606 }, 00:18:31.606 "method": "bdev_nvme_attach_controller" 00:18:31.606 }' 00:18:31.606 [2024-07-13 15:30:02.236700] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:18:31.606 [2024-07-13 15:30:02.236767] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1110551 ] 00:18:31.606 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.606 [2024-07-13 15:30:02.269842] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:31.606 [2024-07-13 15:30:02.300135] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:31.863 [2024-07-13 15:30:02.393726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.863 [2024-07-13 15:30:02.393777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.863 [2024-07-13 15:30:02.393780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.863 I/O targets: 00:18:31.863 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:31.863 00:18:31.863 00:18:31.863 CUnit - A unit testing framework for C - Version 2.1-3 00:18:31.863 http://cunit.sourceforge.net/ 00:18:31.863 00:18:31.863 00:18:31.863 Suite: bdevio tests on: Nvme1n1 00:18:32.120 Test: blockdev write read block ...passed 00:18:32.120 Test: blockdev write zeroes read block ...passed 00:18:32.120 Test: blockdev write zeroes read no split ...passed 00:18:32.120 Test: blockdev write zeroes read split ...passed 00:18:32.120 Test: blockdev write zeroes read split partial ...passed 00:18:32.120 Test: blockdev reset ...[2024-07-13 15:30:02.817796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:32.120 [2024-07-13 15:30:02.817909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf6940 (9): Bad file descriptor 00:18:32.120 [2024-07-13 15:30:02.872641] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:32.120 passed 00:18:32.120 Test: blockdev write read 8 blocks ...passed 00:18:32.120 Test: blockdev write read size > 128k ...passed 00:18:32.120 Test: blockdev write read invalid size ...passed 00:18:32.375 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:32.376 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:32.376 Test: blockdev write read max offset ...passed 00:18:32.376 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:32.376 Test: blockdev writev readv 8 blocks ...passed 00:18:32.376 Test: blockdev writev readv 30 x 1block ...passed 00:18:32.376 Test: blockdev writev readv block ...passed 00:18:32.376 Test: blockdev writev readv size > 128k ...passed 00:18:32.376 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:32.376 Test: blockdev comparev and writev ...[2024-07-13 15:30:03.047468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:32.376 [2024-07-13 15:30:03.047511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.376 [2024-07-13 15:30:03.047536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:32.376 [2024-07-13 15:30:03.047554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.376 [2024-07-13 15:30:03.047941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:32.376 [2024-07-13 15:30:03.047966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:32.376 [2024-07-13 15:30:03.047988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:18:32.376 [2024-07-13 15:30:03.048004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:32.376 [2024-07-13 15:30:03.048382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:32.376 [2024-07-13 15:30:03.048406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:32.376 [2024-07-13 15:30:03.048427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:32.376 [2024-07-13 15:30:03.048444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:32.376 [2024-07-13 15:30:03.048815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:32.376 [2024-07-13 15:30:03.048838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:32.376 [2024-07-13 15:30:03.048859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:32.376 [2024-07-13 15:30:03.048883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:32.376 passed 00:18:32.376 Test: blockdev nvme passthru rw ...passed 00:18:32.376 Test: blockdev nvme passthru vendor specific ...[2024-07-13 15:30:03.131226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:32.376 [2024-07-13 15:30:03.131253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:32.376 [2024-07-13 15:30:03.131465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:32.376 [2024-07-13 15:30:03.131488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:32.376 [2024-07-13 15:30:03.131692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:32.376 [2024-07-13 15:30:03.131715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:32.376 [2024-07-13 15:30:03.131920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:32.376 [2024-07-13 15:30:03.131944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:32.376 passed 00:18:32.632 Test: blockdev nvme admin passthru ...passed 00:18:32.632 Test: blockdev copy ...passed 00:18:32.632 00:18:32.632 Run Summary: Type Total Ran Passed Failed Inactive 00:18:32.632 suites 1 1 n/a 0 0 00:18:32.632 tests 23 23 23 0 0 00:18:32.632 asserts 152 152 152 0 n/a 00:18:32.632 00:18:32.632 Elapsed time = 1.149 seconds 00:18:32.632 15:30:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:32.632 15:30:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:32.632 15:30:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:32.632 15:30:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.632 15:30:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:32.632 15:30:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:18:32.632 15:30:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:32.632 15:30:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:32.632 15:30:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:32.632 15:30:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:32.632 15:30:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:32.632 15:30:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:32.632 rmmod nvme_tcp 00:18:32.889 rmmod nvme_fabrics 00:18:32.889 rmmod nvme_keyring 00:18:32.889 15:30:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:32.889 15:30:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:32.889 15:30:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:32.889 15:30:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1110404 ']' 00:18:32.889 15:30:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1110404 00:18:32.889 15:30:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 1110404 ']' 00:18:32.889 15:30:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1110404 00:18:32.889 15:30:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:18:32.889 15:30:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:32.889 15:30:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1110404 00:18:32.889 15:30:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:32.889 15:30:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:32.889 15:30:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1110404' 00:18:32.889 killing process with pid 1110404 00:18:32.889 15:30:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1110404 00:18:32.889 15:30:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1110404 00:18:33.146 15:30:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:33.146 15:30:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:33.146 15:30:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:33.146 15:30:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:33.146 15:30:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:33.146 15:30:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.146 15:30:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.146 15:30:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.046 15:30:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:35.046 00:18:35.046 real 0m6.120s 00:18:35.046 user 0m9.768s 00:18:35.046 sys 0m2.006s 00:18:35.046 15:30:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:18:35.046 15:30:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:35.046 ************************************ 00:18:35.046 END TEST nvmf_bdevio 00:18:35.046 ************************************ 00:18:35.046 15:30:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:35.046 15:30:05 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:35.046 15:30:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:35.046 15:30:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:35.046 15:30:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:35.304 ************************************ 00:18:35.304 START TEST nvmf_auth_target 00:18:35.304 ************************************ 00:18:35.304 15:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:35.304 * Looking for test storage... 00:18:35.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:35.304 15:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:35.304 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:35.304 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.304 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.304 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.304 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.304 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.304 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.304 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.304 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.304 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.304 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.304 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:35.304 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:35.305 15:30:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:35.305 15:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:37.208 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.208 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:37.209 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:37.209 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:37.209 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:37.209 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:37.468 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:37.468 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:37.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:37.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:18:37.468 00:18:37.468 --- 10.0.0.2 ping statistics --- 00:18:37.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.468 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:18:37.468 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:37.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:37.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:18:37.468 00:18:37.468 --- 10.0.0.1 ping statistics --- 00:18:37.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.468 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:18:37.468 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:37.468 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:37.468 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:37.468 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:37.468 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:37.468 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:37.468 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:37.468 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:37.468 15:30:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:37.468 15:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:37.468 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:37.468 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:37.468 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.468 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1112936 00:18:37.468 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:37.468 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1112936 00:18:37.468 15:30:08 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@829 -- # '[' -z 1112936 ']' 00:18:37.468 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.468 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:37.468 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.468 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:37.468 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.727 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:37.727 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:37.727 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:37.727 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:37.727 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.727 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.727 15:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1113008 00:18:37.727 15:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:37.727 15:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:37.727 15:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:37.727 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:37.727 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:37.727 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:37.727 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:37.727 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:37.727 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:37.727 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d58f918e431bc2b72f48f0802b2bf1e66e4680dcb3f929d6 00:18:37.727 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:37.727 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.6ik 00:18:37.727 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d58f918e431bc2b72f48f0802b2bf1e66e4680dcb3f929d6 0 00:18:37.727 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d58f918e431bc2b72f48f0802b2bf1e66e4680dcb3f929d6 0 00:18:37.727 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:37.727 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d58f918e431bc2b72f48f0802b2bf1e66e4680dcb3f929d6 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.6ik 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.6ik 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.6ik 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cc25bc9a068ee04004ad580b78991e2a80e8900bfab1fcf5ba439f8f487cf12b 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Frt 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cc25bc9a068ee04004ad580b78991e2a80e8900bfab1fcf5ba439f8f487cf12b 3 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cc25bc9a068ee04004ad580b78991e2a80e8900bfab1fcf5ba439f8f487cf12b 3 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cc25bc9a068ee04004ad580b78991e2a80e8900bfab1fcf5ba439f8f487cf12b 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Frt 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Frt 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Frt 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6660bdb09d3e064f6e8617ee0859f632 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.hDX 00:18:37.728 15:30:08 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6660bdb09d3e064f6e8617ee0859f632 1 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6660bdb09d3e064f6e8617ee0859f632 1 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6660bdb09d3e064f6e8617ee0859f632 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:37.728 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.hDX 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.hDX 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.hDX 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=26ce4d5c07081f962d6b5bf54ee4aca819383a2fa783b4f1 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.4Vx 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 26ce4d5c07081f962d6b5bf54ee4aca819383a2fa783b4f1 2 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 26ce4d5c07081f962d6b5bf54ee4aca819383a2fa783b4f1 2 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=26ce4d5c07081f962d6b5bf54ee4aca819383a2fa783b4f1 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.4Vx 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.4Vx 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.4Vx 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:37.987 15:30:08 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=48ca3c960c34f089d65b611045d7fcce0a5e618ee49358cb 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.wMS 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 48ca3c960c34f089d65b611045d7fcce0a5e618ee49358cb 2 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 48ca3c960c34f089d65b611045d7fcce0a5e618ee49358cb 2 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=48ca3c960c34f089d65b611045d7fcce0a5e618ee49358cb 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.wMS 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.wMS 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.wMS 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:37.987 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0957d0b480fdc688795f75e3a8cfaf8c 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.SQj 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0957d0b480fdc688795f75e3a8cfaf8c 1 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0957d0b480fdc688795f75e3a8cfaf8c 1 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0957d0b480fdc688795f75e3a8cfaf8c 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:37.988 
15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.SQj 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.SQj 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.SQj 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9503dc552a0fb490793afc97209bcf2aac34cb7b98da397b79a4941532d696e7 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.oUs 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9503dc552a0fb490793afc97209bcf2aac34cb7b98da397b79a4941532d696e7 3 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9503dc552a0fb490793afc97209bcf2aac34cb7b98da397b79a4941532d696e7 3 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9503dc552a0fb490793afc97209bcf2aac34cb7b98da397b79a4941532d696e7 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.oUs 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.oUs 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.oUs 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1112936 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1112936 ']' 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
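For readers following the trace above: the gen_dhchap_key/format_dhchap_key sequence (xxd on /dev/urandom, mktemp, a python formatting step, chmod 0600) produces the DHHC-1 secrets that reappear later in this log as --dhchap-secret DHHC-1:<hash-id>:<base64>: (hash id 00 = null, 01 = sha256, 02 = sha384, 03 = sha512). The python body is not captured by xtrace, so the sketch below is a hedged reconstruction rather than SPDK's verbatim helper; in particular the CRC-32 check bytes and their little-endian packing are assumptions inferred from the base64 payloads seen in the nvme connect commands further down.

key=$(xxd -p -c0 -l 24 /dev/urandom)             # 24 random bytes -> 48 hex characters (the secret text), as in the trace
file=$(mktemp -t spdk.key-null.XXX)              # same naming pattern as the trace: spdk.key-<digest>.XXX
python3 - "$key" > "$file" <<'PY'                # formatting step; the trace only shows "python -", body assumed
import base64, struct, sys, zlib
secret = sys.argv[1].encode()                    # the hex string itself is the secret material
crc = struct.pack('<I', zlib.crc32(secret))      # assumed CRC-32 check bytes, packed little-endian
print('DHHC-1:00:' + base64.b64encode(secret + crc).decode() + ':')
PY
chmod 0600 "$file"                               # keys are chmod 0600 before being handed to the keyring

Each generated file is then registered twice in the trace that follows: once on the target side via rpc_cmd keyring_file_add_key keyN <file>, and once on the host side via rpc.py -s /var/tmp/host.sock keyring_file_add_key keyN <file>, before bdev_nvme_set_options selects the digest/dhgroup combination under test.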
00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:37.988 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.247 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:38.247 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:38.247 15:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1113008 /var/tmp/host.sock 00:18:38.247 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1113008 ']' 00:18:38.247 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:18:38.247 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:38.247 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:38.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:38.247 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:38.247 15:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.504 15:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:38.504 15:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:38.504 15:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:38.504 15:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.504 15:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.762 15:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.762 15:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:38.762 15:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.6ik 00:18:38.762 15:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.762 15:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.762 15:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.762 15:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.6ik 00:18:38.762 15:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.6ik 00:18:39.020 15:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.Frt ]] 00:18:39.020 15:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Frt 00:18:39.020 15:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.020 15:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.020 15:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.020 15:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Frt 00:18:39.020 15:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Frt 00:18:39.279 15:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:39.279 15:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.hDX 00:18:39.279 15:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.279 15:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.279 15:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.279 15:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.hDX 00:18:39.279 15:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.hDX 00:18:39.537 15:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.4Vx ]] 00:18:39.537 15:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4Vx 00:18:39.537 15:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.537 15:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.537 15:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.537 15:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4Vx 00:18:39.537 15:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4Vx 00:18:39.794 15:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:39.794 15:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.wMS 00:18:39.795 15:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.795 15:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.795 15:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.795 15:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.wMS 00:18:39.795 15:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.wMS 00:18:40.052 15:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.SQj ]] 00:18:40.052 15:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.SQj 00:18:40.052 15:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.052 15:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.052 15:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.052 15:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.SQj 00:18:40.052 15:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.SQj 00:18:40.310 15:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:40.310 15:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.oUs 00:18:40.310 15:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.310 15:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.310 15:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.310 15:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.oUs 00:18:40.310 15:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.oUs 00:18:40.568 15:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:40.568 15:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:40.568 15:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:40.568 15:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.568 15:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:40.568 15:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:40.826 15:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:40.826 15:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.826 15:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:40.826 15:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:40.826 15:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:40.826 15:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.826 15:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.826 15:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.826 15:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.826 15:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.827 15:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.827 15:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.085 00:18:41.085 15:30:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.085 15:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.086 15:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.344 15:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.344 15:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.344 15:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.344 15:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.344 15:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.344 15:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.344 { 00:18:41.344 "cntlid": 1, 00:18:41.344 "qid": 0, 00:18:41.344 "state": "enabled", 00:18:41.344 "thread": "nvmf_tgt_poll_group_000", 00:18:41.344 "listen_address": { 00:18:41.344 "trtype": "TCP", 00:18:41.344 "adrfam": "IPv4", 00:18:41.344 "traddr": "10.0.0.2", 00:18:41.344 "trsvcid": "4420" 00:18:41.344 }, 00:18:41.344 "peer_address": { 00:18:41.344 "trtype": "TCP", 00:18:41.344 "adrfam": "IPv4", 00:18:41.344 "traddr": "10.0.0.1", 00:18:41.344 "trsvcid": "57132" 00:18:41.344 }, 00:18:41.344 "auth": { 00:18:41.344 "state": "completed", 00:18:41.344 "digest": "sha256", 00:18:41.344 "dhgroup": "null" 00:18:41.344 } 00:18:41.344 } 00:18:41.344 ]' 00:18:41.344 15:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.344 15:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:41.344 15:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.344 15:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:41.344 15:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.344 15:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.344 15:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.344 15:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.631 15:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDU4ZjkxOGU0MzFiYzJiNzJmNDhmMDgwMmIyYmYxZTY2ZTQ2ODBkY2IzZjkyOWQ2FRlmSg==: --dhchap-ctrl-secret DHHC-1:03:Y2MyNWJjOWEwNjhlZTA0MDA0YWQ1ODBiNzg5OTFlMmE4MGU4OTAwYmZhYjFmY2Y1YmE0MzlmOGY0ODdjZjEyYoFFa1c=: 00:18:42.565 15:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.565 15:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:42.565 15:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.565 15:30:13 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.565 15:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.565 15:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.565 15:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:42.565 15:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:42.822 15:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:42.822 15:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.822 15:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:42.822 15:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:42.822 15:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:42.822 15:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.822 15:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.822 15:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.822 15:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.822 15:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.822 15:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.822 15:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.079 00:18:43.079 15:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.079 15:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.079 15:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.337 15:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.337 15:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.337 15:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.337 15:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.337 15:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.337 15:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.337 { 00:18:43.337 "cntlid": 3, 00:18:43.337 "qid": 0, 00:18:43.337 
"state": "enabled", 00:18:43.337 "thread": "nvmf_tgt_poll_group_000", 00:18:43.337 "listen_address": { 00:18:43.337 "trtype": "TCP", 00:18:43.337 "adrfam": "IPv4", 00:18:43.337 "traddr": "10.0.0.2", 00:18:43.337 "trsvcid": "4420" 00:18:43.337 }, 00:18:43.337 "peer_address": { 00:18:43.337 "trtype": "TCP", 00:18:43.337 "adrfam": "IPv4", 00:18:43.337 "traddr": "10.0.0.1", 00:18:43.337 "trsvcid": "57174" 00:18:43.337 }, 00:18:43.337 "auth": { 00:18:43.337 "state": "completed", 00:18:43.337 "digest": "sha256", 00:18:43.337 "dhgroup": "null" 00:18:43.337 } 00:18:43.337 } 00:18:43.337 ]' 00:18:43.337 15:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.595 15:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.595 15:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.595 15:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:43.595 15:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.595 15:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.595 15:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.595 15:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.854 15:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjY2MGJkYjA5ZDNlMDY0ZjZlODYxN2VlMDg1OWY2MzLRCdob: --dhchap-ctrl-secret DHHC-1:02:MjZjZTRkNWMwNzA4MWY5NjJkNmI1YmY1NGVlNGFjYTgxOTM4M2EyZmE3ODNiNGYxdMlbGA==: 00:18:44.790 15:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.790 15:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:44.790 15:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.790 15:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.790 15:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.790 15:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.790 15:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:44.790 15:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:45.049 15:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:45.049 15:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.049 15:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:45.049 15:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:45.049 15:30:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:45.049 15:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.049 15:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.049 15:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.049 15:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.049 15:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.049 15:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.049 15:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.306 00:18:45.306 15:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.306 15:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.306 15:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.563 15:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.563 15:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.563 15:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.563 15:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.563 15:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.563 15:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.563 { 00:18:45.563 "cntlid": 5, 00:18:45.563 "qid": 0, 00:18:45.563 "state": "enabled", 00:18:45.563 "thread": "nvmf_tgt_poll_group_000", 00:18:45.563 "listen_address": { 00:18:45.563 "trtype": "TCP", 00:18:45.563 "adrfam": "IPv4", 00:18:45.563 "traddr": "10.0.0.2", 00:18:45.563 "trsvcid": "4420" 00:18:45.563 }, 00:18:45.563 "peer_address": { 00:18:45.563 "trtype": "TCP", 00:18:45.563 "adrfam": "IPv4", 00:18:45.563 "traddr": "10.0.0.1", 00:18:45.563 "trsvcid": "57194" 00:18:45.563 }, 00:18:45.563 "auth": { 00:18:45.563 "state": "completed", 00:18:45.563 "digest": "sha256", 00:18:45.563 "dhgroup": "null" 00:18:45.563 } 00:18:45.563 } 00:18:45.563 ]' 00:18:45.563 15:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.563 15:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.563 15:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.563 15:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:45.563 15:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:18:45.563 15:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.563 15:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.563 15:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.822 15:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDhjYTNjOTYwYzM0ZjA4OWQ2NWI2MTEwNDVkN2ZjY2UwYTVlNjE4ZWU0OTM1OGNifq4+GA==: --dhchap-ctrl-secret DHHC-1:01:MDk1N2QwYjQ4MGZkYzY4ODc5NWY3NWUzYThjZmFmOGMA7b0d: 00:18:46.759 15:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.759 15:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:46.759 15:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.759 15:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.759 15:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.759 15:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.759 15:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:46.759 15:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:47.017 15:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:47.017 15:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.017 15:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:47.017 15:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:47.017 15:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:47.017 15:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.017 15:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:47.017 15:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.017 15:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.275 15:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.275 15:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.275 15:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.532 00:18:47.532 15:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.533 15:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.533 15:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.791 15:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.791 15:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.791 15:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.791 15:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.791 15:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.791 15:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.791 { 00:18:47.791 "cntlid": 7, 00:18:47.791 "qid": 0, 00:18:47.791 "state": "enabled", 00:18:47.791 "thread": "nvmf_tgt_poll_group_000", 00:18:47.791 "listen_address": { 00:18:47.791 "trtype": "TCP", 00:18:47.791 "adrfam": "IPv4", 00:18:47.791 "traddr": "10.0.0.2", 00:18:47.791 "trsvcid": "4420" 00:18:47.791 }, 00:18:47.791 "peer_address": { 00:18:47.791 "trtype": "TCP", 00:18:47.791 "adrfam": "IPv4", 00:18:47.791 "traddr": "10.0.0.1", 00:18:47.791 "trsvcid": "35262" 00:18:47.791 }, 00:18:47.791 "auth": { 00:18:47.791 "state": "completed", 00:18:47.791 "digest": "sha256", 00:18:47.791 "dhgroup": "null" 00:18:47.791 } 00:18:47.791 } 00:18:47.791 ]' 00:18:47.791 15:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.791 15:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:47.791 15:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.791 15:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:47.792 15:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.792 15:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.792 15:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.792 15:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.050 15:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OTUwM2RjNTUyYTBmYjQ5MDc5M2FmYzk3MjA5YmNmMmFhYzM0Y2I3Yjk4ZGEzOTdiNzlhNDk0MTUzMmQ2OTZlN0S9laE=: 00:18:48.982 15:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.982 15:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.982 15:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.982 15:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.982 15:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.982 15:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:48.982 15:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.982 15:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:48.982 15:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:49.239 15:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:49.239 15:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.239 15:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:49.239 15:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:49.239 15:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:49.239 15:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.239 15:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.239 15:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.239 15:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.239 15:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.240 15:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.240 15:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.498 00:18:49.755 15:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.755 15:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.755 15:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.755 15:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.755 15:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.755 15:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:18:49.755 15:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.755 15:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.755 15:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.755 { 00:18:49.755 "cntlid": 9, 00:18:49.755 "qid": 0, 00:18:49.755 "state": "enabled", 00:18:49.755 "thread": "nvmf_tgt_poll_group_000", 00:18:49.755 "listen_address": { 00:18:49.755 "trtype": "TCP", 00:18:49.755 "adrfam": "IPv4", 00:18:49.755 "traddr": "10.0.0.2", 00:18:49.755 "trsvcid": "4420" 00:18:49.755 }, 00:18:49.755 "peer_address": { 00:18:49.755 "trtype": "TCP", 00:18:49.755 "adrfam": "IPv4", 00:18:49.755 "traddr": "10.0.0.1", 00:18:49.755 "trsvcid": "35296" 00:18:49.755 }, 00:18:49.755 "auth": { 00:18:49.755 "state": "completed", 00:18:49.755 "digest": "sha256", 00:18:49.755 "dhgroup": "ffdhe2048" 00:18:49.755 } 00:18:49.755 } 00:18:49.755 ]' 00:18:50.012 15:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.012 15:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:50.012 15:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.012 15:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:50.012 15:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.012 15:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.013 15:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.013 15:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.270 15:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDU4ZjkxOGU0MzFiYzJiNzJmNDhmMDgwMmIyYmYxZTY2ZTQ2ODBkY2IzZjkyOWQ2FRlmSg==: --dhchap-ctrl-secret DHHC-1:03:Y2MyNWJjOWEwNjhlZTA0MDA0YWQ1ODBiNzg5OTFlMmE4MGU4OTAwYmZhYjFmY2Y1YmE0MzlmOGY0ODdjZjEyYoFFa1c=: 00:18:51.201 15:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.201 15:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:51.201 15:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.201 15:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.201 15:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.201 15:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.201 15:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.201 15:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:51.459 15:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:51.459 15:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.459 15:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:51.459 15:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:51.459 15:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:51.459 15:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.459 15:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.459 15:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.459 15:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.459 15:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.459 15:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.459 15:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.717 00:18:51.717 15:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.717 15:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.717 15:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.975 15:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.975 15:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.975 15:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.975 15:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.975 15:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.975 15:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.975 { 00:18:51.975 "cntlid": 11, 00:18:51.975 "qid": 0, 00:18:51.975 "state": "enabled", 00:18:51.975 "thread": "nvmf_tgt_poll_group_000", 00:18:51.975 "listen_address": { 00:18:51.975 "trtype": "TCP", 00:18:51.975 "adrfam": "IPv4", 00:18:51.975 "traddr": "10.0.0.2", 00:18:51.975 "trsvcid": "4420" 00:18:51.975 }, 00:18:51.975 "peer_address": { 00:18:51.975 "trtype": "TCP", 00:18:51.975 "adrfam": "IPv4", 00:18:51.975 "traddr": "10.0.0.1", 00:18:51.975 "trsvcid": "35320" 00:18:51.975 }, 00:18:51.975 "auth": { 00:18:51.975 "state": "completed", 00:18:51.975 "digest": "sha256", 00:18:51.975 "dhgroup": "ffdhe2048" 00:18:51.975 } 00:18:51.975 } 00:18:51.975 ]' 00:18:51.975 
15:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.233 15:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.233 15:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.233 15:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:52.233 15:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.233 15:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.233 15:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.233 15:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.489 15:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjY2MGJkYjA5ZDNlMDY0ZjZlODYxN2VlMDg1OWY2MzLRCdob: --dhchap-ctrl-secret DHHC-1:02:MjZjZTRkNWMwNzA4MWY5NjJkNmI1YmY1NGVlNGFjYTgxOTM4M2EyZmE3ODNiNGYxdMlbGA==: 00:18:53.419 15:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.419 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:53.419 15:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.419 15:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.419 15:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.419 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.419 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:53.419 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:53.675 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:53.675 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.675 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:53.675 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:53.675 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:53.675 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.675 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.675 15:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.675 15:30:24 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:53.675 15:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.675 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.675 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.933 00:18:53.933 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.933 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.933 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.190 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.190 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.190 15:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.190 15:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.190 15:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.190 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.190 { 00:18:54.190 "cntlid": 13, 00:18:54.190 "qid": 0, 00:18:54.190 "state": "enabled", 00:18:54.190 "thread": "nvmf_tgt_poll_group_000", 00:18:54.190 "listen_address": { 00:18:54.190 "trtype": "TCP", 00:18:54.190 "adrfam": "IPv4", 00:18:54.190 "traddr": "10.0.0.2", 00:18:54.190 "trsvcid": "4420" 00:18:54.190 }, 00:18:54.190 "peer_address": { 00:18:54.190 "trtype": "TCP", 00:18:54.190 "adrfam": "IPv4", 00:18:54.190 "traddr": "10.0.0.1", 00:18:54.190 "trsvcid": "35354" 00:18:54.190 }, 00:18:54.190 "auth": { 00:18:54.190 "state": "completed", 00:18:54.190 "digest": "sha256", 00:18:54.190 "dhgroup": "ffdhe2048" 00:18:54.190 } 00:18:54.190 } 00:18:54.190 ]' 00:18:54.190 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.190 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.190 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.448 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:54.448 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.448 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.448 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.448 15:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.705 15:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDhjYTNjOTYwYzM0ZjA4OWQ2NWI2MTEwNDVkN2ZjY2UwYTVlNjE4ZWU0OTM1OGNifq4+GA==: --dhchap-ctrl-secret DHHC-1:01:MDk1N2QwYjQ4MGZkYzY4ODc5NWY3NWUzYThjZmFmOGMA7b0d: 00:18:55.686 15:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.686 15:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:55.686 15:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.686 15:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.686 15:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.686 15:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.686 15:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:55.686 15:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:55.943 15:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:55.943 15:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.943 15:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:55.943 15:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:55.943 15:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:55.943 15:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.943 15:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:55.943 15:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.943 15:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.943 15:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.943 15:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.943 15:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:56.201 00:18:56.201 15:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.201 15:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.201 15:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.458 15:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.458 15:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.458 15:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.458 15:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.458 15:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.458 15:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.458 { 00:18:56.458 "cntlid": 15, 00:18:56.458 "qid": 0, 00:18:56.458 "state": "enabled", 00:18:56.458 "thread": "nvmf_tgt_poll_group_000", 00:18:56.458 "listen_address": { 00:18:56.458 "trtype": "TCP", 00:18:56.458 "adrfam": "IPv4", 00:18:56.458 "traddr": "10.0.0.2", 00:18:56.458 "trsvcid": "4420" 00:18:56.458 }, 00:18:56.458 "peer_address": { 00:18:56.458 "trtype": "TCP", 00:18:56.458 "adrfam": "IPv4", 00:18:56.458 "traddr": "10.0.0.1", 00:18:56.458 "trsvcid": "35382" 00:18:56.458 }, 00:18:56.458 "auth": { 00:18:56.458 "state": "completed", 00:18:56.458 "digest": "sha256", 00:18:56.458 "dhgroup": "ffdhe2048" 00:18:56.458 } 00:18:56.458 } 00:18:56.458 ]' 00:18:56.458 15:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.715 15:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.715 15:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.715 15:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:56.715 15:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.715 15:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.715 15:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.715 15:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.972 15:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OTUwM2RjNTUyYTBmYjQ5MDc5M2FmYzk3MjA5YmNmMmFhYzM0Y2I3Yjk4ZGEzOTdiNzlhNDk0MTUzMmQ2OTZlN0S9laE=: 00:18:57.904 15:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.904 15:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.904 15:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.904 15:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.904 15:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.904 15:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.904 15:30:28 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.904 15:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:57.904 15:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:58.162 15:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:58.162 15:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.162 15:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:58.162 15:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:58.162 15:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:58.162 15:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.162 15:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.162 15:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.162 15:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.162 15:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.162 15:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.162 15:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.421 00:18:58.678 15:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.678 15:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.678 15:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.936 15:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.936 15:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.936 15:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.936 15:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.936 15:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.936 15:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.936 { 00:18:58.936 "cntlid": 17, 00:18:58.936 "qid": 0, 00:18:58.936 "state": "enabled", 00:18:58.936 "thread": "nvmf_tgt_poll_group_000", 00:18:58.936 "listen_address": { 00:18:58.936 "trtype": "TCP", 00:18:58.936 "adrfam": "IPv4", 00:18:58.936 "traddr": 
"10.0.0.2", 00:18:58.936 "trsvcid": "4420" 00:18:58.936 }, 00:18:58.936 "peer_address": { 00:18:58.936 "trtype": "TCP", 00:18:58.936 "adrfam": "IPv4", 00:18:58.936 "traddr": "10.0.0.1", 00:18:58.936 "trsvcid": "46112" 00:18:58.936 }, 00:18:58.936 "auth": { 00:18:58.936 "state": "completed", 00:18:58.936 "digest": "sha256", 00:18:58.936 "dhgroup": "ffdhe3072" 00:18:58.936 } 00:18:58.936 } 00:18:58.936 ]' 00:18:58.936 15:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.936 15:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.936 15:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.936 15:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:58.936 15:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.936 15:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.936 15:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.936 15:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.193 15:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDU4ZjkxOGU0MzFiYzJiNzJmNDhmMDgwMmIyYmYxZTY2ZTQ2ODBkY2IzZjkyOWQ2FRlmSg==: --dhchap-ctrl-secret DHHC-1:03:Y2MyNWJjOWEwNjhlZTA0MDA0YWQ1ODBiNzg5OTFlMmE4MGU4OTAwYmZhYjFmY2Y1YmE0MzlmOGY0ODdjZjEyYoFFa1c=: 00:19:00.123 15:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.123 15:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:00.123 15:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.123 15:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.123 15:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.123 15:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.123 15:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:00.123 15:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:00.380 15:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:00.380 15:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.380 15:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:00.380 15:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:00.380 15:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:00.380 15:30:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.380 15:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.380 15:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.380 15:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.380 15:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.380 15:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.380 15:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.944 00:19:00.944 15:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.944 15:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.944 15:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.944 15:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.944 15:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.944 15:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.944 15:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.944 15:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.944 15:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.944 { 00:19:00.944 "cntlid": 19, 00:19:00.944 "qid": 0, 00:19:00.944 "state": "enabled", 00:19:00.944 "thread": "nvmf_tgt_poll_group_000", 00:19:00.944 "listen_address": { 00:19:00.944 "trtype": "TCP", 00:19:00.944 "adrfam": "IPv4", 00:19:00.944 "traddr": "10.0.0.2", 00:19:00.944 "trsvcid": "4420" 00:19:00.944 }, 00:19:00.944 "peer_address": { 00:19:00.944 "trtype": "TCP", 00:19:00.944 "adrfam": "IPv4", 00:19:00.944 "traddr": "10.0.0.1", 00:19:00.944 "trsvcid": "46146" 00:19:00.944 }, 00:19:00.944 "auth": { 00:19:00.944 "state": "completed", 00:19:00.944 "digest": "sha256", 00:19:00.944 "dhgroup": "ffdhe3072" 00:19:00.944 } 00:19:00.944 } 00:19:00.944 ]' 00:19:00.945 15:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.201 15:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.201 15:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.201 15:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:01.201 15:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.201 15:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.201 15:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.201 15:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.458 15:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjY2MGJkYjA5ZDNlMDY0ZjZlODYxN2VlMDg1OWY2MzLRCdob: --dhchap-ctrl-secret DHHC-1:02:MjZjZTRkNWMwNzA4MWY5NjJkNmI1YmY1NGVlNGFjYTgxOTM4M2EyZmE3ODNiNGYxdMlbGA==: 00:19:02.389 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.389 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:02.389 15:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.389 15:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.389 15:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.389 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.389 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:02.389 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:02.646 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:02.646 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.646 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:02.646 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:02.646 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:02.646 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.646 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.646 15:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.646 15:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.646 15:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.646 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.646 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.903 00:19:02.903 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.903 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.903 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.160 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.161 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.161 15:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.161 15:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.161 15:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.161 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.161 { 00:19:03.161 "cntlid": 21, 00:19:03.161 "qid": 0, 00:19:03.161 "state": "enabled", 00:19:03.161 "thread": "nvmf_tgt_poll_group_000", 00:19:03.161 "listen_address": { 00:19:03.161 "trtype": "TCP", 00:19:03.161 "adrfam": "IPv4", 00:19:03.161 "traddr": "10.0.0.2", 00:19:03.161 "trsvcid": "4420" 00:19:03.161 }, 00:19:03.161 "peer_address": { 00:19:03.161 "trtype": "TCP", 00:19:03.161 "adrfam": "IPv4", 00:19:03.161 "traddr": "10.0.0.1", 00:19:03.161 "trsvcid": "46158" 00:19:03.161 }, 00:19:03.161 "auth": { 00:19:03.161 "state": "completed", 00:19:03.161 "digest": "sha256", 00:19:03.161 "dhgroup": "ffdhe3072" 00:19:03.161 } 00:19:03.161 } 00:19:03.161 ]' 00:19:03.161 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.417 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:03.417 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.417 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:03.417 15:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.417 15:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.417 15:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.417 15:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.674 15:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDhjYTNjOTYwYzM0ZjA4OWQ2NWI2MTEwNDVkN2ZjY2UwYTVlNjE4ZWU0OTM1OGNifq4+GA==: --dhchap-ctrl-secret DHHC-1:01:MDk1N2QwYjQ4MGZkYzY4ODc5NWY3NWUzYThjZmFmOGMA7b0d: 00:19:04.606 15:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:19:04.606 15:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.606 15:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.606 15:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.606 15:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.606 15:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.606 15:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:04.606 15:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:04.864 15:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:04.864 15:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.864 15:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:04.864 15:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:04.864 15:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:04.864 15:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.864 15:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:04.864 15:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.864 15:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.864 15:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.864 15:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:04.864 15:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:05.429 00:19:05.429 15:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.429 15:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.429 15:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.429 15:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.429 15:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.429 15:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.429 15:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:05.686 15:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.687 15:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.687 { 00:19:05.687 "cntlid": 23, 00:19:05.687 "qid": 0, 00:19:05.687 "state": "enabled", 00:19:05.687 "thread": "nvmf_tgt_poll_group_000", 00:19:05.687 "listen_address": { 00:19:05.687 "trtype": "TCP", 00:19:05.687 "adrfam": "IPv4", 00:19:05.687 "traddr": "10.0.0.2", 00:19:05.687 "trsvcid": "4420" 00:19:05.687 }, 00:19:05.687 "peer_address": { 00:19:05.687 "trtype": "TCP", 00:19:05.687 "adrfam": "IPv4", 00:19:05.687 "traddr": "10.0.0.1", 00:19:05.687 "trsvcid": "46200" 00:19:05.687 }, 00:19:05.687 "auth": { 00:19:05.687 "state": "completed", 00:19:05.687 "digest": "sha256", 00:19:05.687 "dhgroup": "ffdhe3072" 00:19:05.687 } 00:19:05.687 } 00:19:05.687 ]' 00:19:05.687 15:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.687 15:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.687 15:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.687 15:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:05.687 15:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.687 15:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.687 15:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.687 15:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.944 15:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OTUwM2RjNTUyYTBmYjQ5MDc5M2FmYzk3MjA5YmNmMmFhYzM0Y2I3Yjk4ZGEzOTdiNzlhNDk0MTUzMmQ2OTZlN0S9laE=: 00:19:06.876 15:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.876 15:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.876 15:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.876 15:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.876 15:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.876 15:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.876 15:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.876 15:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:06.876 15:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:07.134 15:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:19:07.134 15:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.134 15:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:07.134 15:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:07.134 15:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:07.134 15:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.134 15:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.134 15:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.134 15:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.134 15:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.134 15:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.134 15:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.391 00:19:07.649 15:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.649 15:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.649 15:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.906 15:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.906 15:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.906 15:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.906 15:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.906 15:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.906 15:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.906 { 00:19:07.906 "cntlid": 25, 00:19:07.906 "qid": 0, 00:19:07.906 "state": "enabled", 00:19:07.906 "thread": "nvmf_tgt_poll_group_000", 00:19:07.906 "listen_address": { 00:19:07.906 "trtype": "TCP", 00:19:07.906 "adrfam": "IPv4", 00:19:07.906 "traddr": "10.0.0.2", 00:19:07.906 "trsvcid": "4420" 00:19:07.906 }, 00:19:07.906 "peer_address": { 00:19:07.906 "trtype": "TCP", 00:19:07.906 "adrfam": "IPv4", 00:19:07.906 "traddr": "10.0.0.1", 00:19:07.906 "trsvcid": "37004" 00:19:07.906 }, 00:19:07.906 "auth": { 00:19:07.906 "state": "completed", 00:19:07.906 "digest": "sha256", 00:19:07.906 "dhgroup": "ffdhe4096" 00:19:07.906 } 00:19:07.906 } 00:19:07.906 ]' 00:19:07.906 15:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.906 15:30:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.906 15:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.906 15:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:07.906 15:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.906 15:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.906 15:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.906 15:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.164 15:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDU4ZjkxOGU0MzFiYzJiNzJmNDhmMDgwMmIyYmYxZTY2ZTQ2ODBkY2IzZjkyOWQ2FRlmSg==: --dhchap-ctrl-secret DHHC-1:03:Y2MyNWJjOWEwNjhlZTA0MDA0YWQ1ODBiNzg5OTFlMmE4MGU4OTAwYmZhYjFmY2Y1YmE0MzlmOGY0ODdjZjEyYoFFa1c=: 00:19:09.160 15:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.160 15:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:09.160 15:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.160 15:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.160 15:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.160 15:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.160 15:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:09.160 15:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:09.418 15:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:09.418 15:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.418 15:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:09.418 15:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:09.418 15:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:09.418 15:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.418 15:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.418 15:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.418 15:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.418 15:30:40 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.418 15:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.418 15:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.675 00:19:09.932 15:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.932 15:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.932 15:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.189 15:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.189 15:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.189 15:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.189 15:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.189 15:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.189 15:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.189 { 00:19:10.189 "cntlid": 27, 00:19:10.189 "qid": 0, 00:19:10.189 "state": "enabled", 00:19:10.189 "thread": "nvmf_tgt_poll_group_000", 00:19:10.189 "listen_address": { 00:19:10.189 "trtype": "TCP", 00:19:10.189 "adrfam": "IPv4", 00:19:10.189 "traddr": "10.0.0.2", 00:19:10.189 "trsvcid": "4420" 00:19:10.189 }, 00:19:10.189 "peer_address": { 00:19:10.189 "trtype": "TCP", 00:19:10.189 "adrfam": "IPv4", 00:19:10.189 "traddr": "10.0.0.1", 00:19:10.189 "trsvcid": "37026" 00:19:10.189 }, 00:19:10.189 "auth": { 00:19:10.189 "state": "completed", 00:19:10.189 "digest": "sha256", 00:19:10.189 "dhgroup": "ffdhe4096" 00:19:10.189 } 00:19:10.189 } 00:19:10.189 ]' 00:19:10.189 15:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.189 15:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.189 15:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.189 15:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:10.189 15:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.189 15:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.189 15:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.189 15:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.446 15:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjY2MGJkYjA5ZDNlMDY0ZjZlODYxN2VlMDg1OWY2MzLRCdob: --dhchap-ctrl-secret DHHC-1:02:MjZjZTRkNWMwNzA4MWY5NjJkNmI1YmY1NGVlNGFjYTgxOTM4M2EyZmE3ODNiNGYxdMlbGA==: 00:19:11.377 15:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.377 15:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:11.377 15:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.377 15:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.377 15:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.377 15:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.377 15:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:11.377 15:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:11.634 15:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:11.634 15:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.634 15:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:11.634 15:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:11.634 15:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:11.634 15:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.634 15:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.634 15:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.634 15:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.634 15:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.634 15:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.634 15:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.205 00:19:12.205 15:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.205 15:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.205 15:30:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.469 15:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.469 15:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.469 15:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.469 15:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.469 15:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.469 15:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.469 { 00:19:12.469 "cntlid": 29, 00:19:12.469 "qid": 0, 00:19:12.469 "state": "enabled", 00:19:12.469 "thread": "nvmf_tgt_poll_group_000", 00:19:12.469 "listen_address": { 00:19:12.469 "trtype": "TCP", 00:19:12.469 "adrfam": "IPv4", 00:19:12.469 "traddr": "10.0.0.2", 00:19:12.469 "trsvcid": "4420" 00:19:12.469 }, 00:19:12.469 "peer_address": { 00:19:12.469 "trtype": "TCP", 00:19:12.469 "adrfam": "IPv4", 00:19:12.469 "traddr": "10.0.0.1", 00:19:12.469 "trsvcid": "37060" 00:19:12.469 }, 00:19:12.469 "auth": { 00:19:12.469 "state": "completed", 00:19:12.469 "digest": "sha256", 00:19:12.469 "dhgroup": "ffdhe4096" 00:19:12.469 } 00:19:12.469 } 00:19:12.469 ]' 00:19:12.469 15:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.469 15:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.469 15:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.469 15:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:12.469 15:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.469 15:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.469 15:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.469 15:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.725 15:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDhjYTNjOTYwYzM0ZjA4OWQ2NWI2MTEwNDVkN2ZjY2UwYTVlNjE4ZWU0OTM1OGNifq4+GA==: --dhchap-ctrl-secret DHHC-1:01:MDk1N2QwYjQ4MGZkYzY4ODc5NWY3NWUzYThjZmFmOGMA7b0d: 00:19:13.656 15:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.656 15:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:13.656 15:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.656 15:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.656 15:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.656 15:30:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.656 15:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:13.656 15:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:14.221 15:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:14.221 15:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.221 15:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:14.221 15:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:14.221 15:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:14.221 15:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.221 15:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:14.221 15:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.221 15:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.221 15:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.221 15:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.221 15:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.478 00:19:14.478 15:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.478 15:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.478 15:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.734 15:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.734 15:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.734 15:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.734 15:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.734 15:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.734 15:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.734 { 00:19:14.734 "cntlid": 31, 00:19:14.734 "qid": 0, 00:19:14.734 "state": "enabled", 00:19:14.734 "thread": "nvmf_tgt_poll_group_000", 00:19:14.734 "listen_address": { 00:19:14.734 "trtype": "TCP", 00:19:14.734 "adrfam": "IPv4", 00:19:14.734 "traddr": "10.0.0.2", 00:19:14.734 "trsvcid": "4420" 00:19:14.734 }, 
00:19:14.734 "peer_address": { 00:19:14.734 "trtype": "TCP", 00:19:14.734 "adrfam": "IPv4", 00:19:14.734 "traddr": "10.0.0.1", 00:19:14.734 "trsvcid": "37072" 00:19:14.734 }, 00:19:14.734 "auth": { 00:19:14.734 "state": "completed", 00:19:14.734 "digest": "sha256", 00:19:14.734 "dhgroup": "ffdhe4096" 00:19:14.734 } 00:19:14.734 } 00:19:14.734 ]' 00:19:14.734 15:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.734 15:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:14.734 15:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.734 15:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:14.734 15:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.991 15:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.991 15:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.991 15:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.248 15:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OTUwM2RjNTUyYTBmYjQ5MDc5M2FmYzk3MjA5YmNmMmFhYzM0Y2I3Yjk4ZGEzOTdiNzlhNDk0MTUzMmQ2OTZlN0S9laE=: 00:19:16.178 15:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.178 15:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.178 15:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.178 15:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.178 15:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.178 15:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.178 15:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.179 15:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:16.179 15:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:16.435 15:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:16.435 15:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.435 15:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:16.435 15:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:16.435 15:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:16.435 15:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:16.435 15:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.435 15:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.435 15:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.435 15:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.435 15:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.435 15:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.001 00:19:17.001 15:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.001 15:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.001 15:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.258 15:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.258 15:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.258 15:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.258 15:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.258 15:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.258 15:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.258 { 00:19:17.258 "cntlid": 33, 00:19:17.258 "qid": 0, 00:19:17.258 "state": "enabled", 00:19:17.258 "thread": "nvmf_tgt_poll_group_000", 00:19:17.258 "listen_address": { 00:19:17.258 "trtype": "TCP", 00:19:17.258 "adrfam": "IPv4", 00:19:17.258 "traddr": "10.0.0.2", 00:19:17.258 "trsvcid": "4420" 00:19:17.258 }, 00:19:17.258 "peer_address": { 00:19:17.258 "trtype": "TCP", 00:19:17.258 "adrfam": "IPv4", 00:19:17.258 "traddr": "10.0.0.1", 00:19:17.258 "trsvcid": "37102" 00:19:17.258 }, 00:19:17.258 "auth": { 00:19:17.258 "state": "completed", 00:19:17.258 "digest": "sha256", 00:19:17.258 "dhgroup": "ffdhe6144" 00:19:17.258 } 00:19:17.258 } 00:19:17.258 ]' 00:19:17.258 15:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.258 15:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.258 15:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.258 15:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:17.258 15:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.258 15:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.258 15:30:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.258 15:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.515 15:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDU4ZjkxOGU0MzFiYzJiNzJmNDhmMDgwMmIyYmYxZTY2ZTQ2ODBkY2IzZjkyOWQ2FRlmSg==: --dhchap-ctrl-secret DHHC-1:03:Y2MyNWJjOWEwNjhlZTA0MDA0YWQ1ODBiNzg5OTFlMmE4MGU4OTAwYmZhYjFmY2Y1YmE0MzlmOGY0ODdjZjEyYoFFa1c=: 00:19:18.447 15:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.447 15:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.447 15:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.447 15:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.447 15:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.447 15:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.447 15:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:18.447 15:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:18.705 15:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:18.705 15:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.705 15:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:18.705 15:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:18.705 15:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:18.705 15:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.705 15:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.705 15:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.705 15:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.705 15:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.705 15:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.705 15:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.272 00:19:19.272 15:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.272 15:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.272 15:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.529 15:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.529 15:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.529 15:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.529 15:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.529 15:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.529 15:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.529 { 00:19:19.529 "cntlid": 35, 00:19:19.529 "qid": 0, 00:19:19.529 "state": "enabled", 00:19:19.529 "thread": "nvmf_tgt_poll_group_000", 00:19:19.529 "listen_address": { 00:19:19.529 "trtype": "TCP", 00:19:19.529 "adrfam": "IPv4", 00:19:19.529 "traddr": "10.0.0.2", 00:19:19.529 "trsvcid": "4420" 00:19:19.529 }, 00:19:19.529 "peer_address": { 00:19:19.529 "trtype": "TCP", 00:19:19.529 "adrfam": "IPv4", 00:19:19.529 "traddr": "10.0.0.1", 00:19:19.529 "trsvcid": "44962" 00:19:19.529 }, 00:19:19.529 "auth": { 00:19:19.529 "state": "completed", 00:19:19.529 "digest": "sha256", 00:19:19.529 "dhgroup": "ffdhe6144" 00:19:19.529 } 00:19:19.529 } 00:19:19.529 ]' 00:19:19.529 15:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.785 15:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.785 15:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.785 15:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:19.785 15:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.785 15:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.785 15:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.785 15:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.041 15:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjY2MGJkYjA5ZDNlMDY0ZjZlODYxN2VlMDg1OWY2MzLRCdob: --dhchap-ctrl-secret DHHC-1:02:MjZjZTRkNWMwNzA4MWY5NjJkNmI1YmY1NGVlNGFjYTgxOTM4M2EyZmE3ODNiNGYxdMlbGA==: 00:19:20.973 15:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.973 15:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.973 15:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.973 15:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.973 15:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.974 15:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.974 15:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:20.974 15:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:21.538 15:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:21.538 15:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.538 15:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:21.538 15:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:21.538 15:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:21.538 15:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.538 15:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.538 15:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.538 15:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.538 15:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.538 15:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.538 15:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.796 00:19:22.054 15:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.054 15:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.054 15:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.054 15:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.054 15:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.054 15:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.054 15:30:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:22.311 15:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.311 15:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.311 { 00:19:22.311 "cntlid": 37, 00:19:22.311 "qid": 0, 00:19:22.311 "state": "enabled", 00:19:22.311 "thread": "nvmf_tgt_poll_group_000", 00:19:22.311 "listen_address": { 00:19:22.311 "trtype": "TCP", 00:19:22.311 "adrfam": "IPv4", 00:19:22.311 "traddr": "10.0.0.2", 00:19:22.311 "trsvcid": "4420" 00:19:22.311 }, 00:19:22.311 "peer_address": { 00:19:22.311 "trtype": "TCP", 00:19:22.311 "adrfam": "IPv4", 00:19:22.311 "traddr": "10.0.0.1", 00:19:22.311 "trsvcid": "44986" 00:19:22.311 }, 00:19:22.311 "auth": { 00:19:22.311 "state": "completed", 00:19:22.311 "digest": "sha256", 00:19:22.311 "dhgroup": "ffdhe6144" 00:19:22.311 } 00:19:22.311 } 00:19:22.311 ]' 00:19:22.311 15:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.311 15:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.311 15:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.311 15:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:22.311 15:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.311 15:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.311 15:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.311 15:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.569 15:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDhjYTNjOTYwYzM0ZjA4OWQ2NWI2MTEwNDVkN2ZjY2UwYTVlNjE4ZWU0OTM1OGNifq4+GA==: --dhchap-ctrl-secret DHHC-1:01:MDk1N2QwYjQ4MGZkYzY4ODc5NWY3NWUzYThjZmFmOGMA7b0d: 00:19:23.534 15:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.534 15:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.534 15:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.534 15:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.534 15:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.534 15:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.534 15:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:23.534 15:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:23.792 15:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:19:23.792 15:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.792 15:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:23.792 15:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:23.792 15:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:23.792 15:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.792 15:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:23.792 15:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.792 15:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.792 15:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.792 15:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:23.792 15:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.357 00:19:24.357 15:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.357 15:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.357 15:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.615 15:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.615 15:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.615 15:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.615 15:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.615 15:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.615 15:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.615 { 00:19:24.615 "cntlid": 39, 00:19:24.615 "qid": 0, 00:19:24.615 "state": "enabled", 00:19:24.615 "thread": "nvmf_tgt_poll_group_000", 00:19:24.615 "listen_address": { 00:19:24.615 "trtype": "TCP", 00:19:24.615 "adrfam": "IPv4", 00:19:24.615 "traddr": "10.0.0.2", 00:19:24.615 "trsvcid": "4420" 00:19:24.615 }, 00:19:24.615 "peer_address": { 00:19:24.615 "trtype": "TCP", 00:19:24.615 "adrfam": "IPv4", 00:19:24.615 "traddr": "10.0.0.1", 00:19:24.615 "trsvcid": "45002" 00:19:24.615 }, 00:19:24.615 "auth": { 00:19:24.615 "state": "completed", 00:19:24.615 "digest": "sha256", 00:19:24.615 "dhgroup": "ffdhe6144" 00:19:24.615 } 00:19:24.615 } 00:19:24.615 ]' 00:19:24.615 15:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.615 15:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.615 15:30:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.615 15:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:24.615 15:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.615 15:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.615 15:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.615 15:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.873 15:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OTUwM2RjNTUyYTBmYjQ5MDc5M2FmYzk3MjA5YmNmMmFhYzM0Y2I3Yjk4ZGEzOTdiNzlhNDk0MTUzMmQ2OTZlN0S9laE=: 00:19:25.804 15:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.804 15:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.804 15:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.804 15:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.804 15:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.804 15:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.804 15:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.804 15:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:25.804 15:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:26.062 15:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:26.062 15:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.062 15:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:26.062 15:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:26.062 15:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:26.062 15:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.062 15:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.062 15:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.062 15:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.062 15:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.062 15:30:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.062 15:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.995 00:19:26.995 15:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.995 15:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.995 15:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.253 15:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.253 15:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.253 15:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.253 15:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.253 15:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.253 15:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.253 { 00:19:27.253 "cntlid": 41, 00:19:27.253 "qid": 0, 00:19:27.253 "state": "enabled", 00:19:27.253 "thread": "nvmf_tgt_poll_group_000", 00:19:27.253 "listen_address": { 00:19:27.253 "trtype": "TCP", 00:19:27.253 "adrfam": "IPv4", 00:19:27.253 "traddr": "10.0.0.2", 00:19:27.253 "trsvcid": "4420" 00:19:27.253 }, 00:19:27.253 "peer_address": { 00:19:27.253 "trtype": "TCP", 00:19:27.253 "adrfam": "IPv4", 00:19:27.253 "traddr": "10.0.0.1", 00:19:27.253 "trsvcid": "45040" 00:19:27.253 }, 00:19:27.253 "auth": { 00:19:27.253 "state": "completed", 00:19:27.253 "digest": "sha256", 00:19:27.253 "dhgroup": "ffdhe8192" 00:19:27.253 } 00:19:27.253 } 00:19:27.253 ]' 00:19:27.253 15:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.253 15:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.253 15:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.511 15:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:27.511 15:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.511 15:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.511 15:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.511 15:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.768 15:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:ZDU4ZjkxOGU0MzFiYzJiNzJmNDhmMDgwMmIyYmYxZTY2ZTQ2ODBkY2IzZjkyOWQ2FRlmSg==: --dhchap-ctrl-secret DHHC-1:03:Y2MyNWJjOWEwNjhlZTA0MDA0YWQ1ODBiNzg5OTFlMmE4MGU4OTAwYmZhYjFmY2Y1YmE0MzlmOGY0ODdjZjEyYoFFa1c=: 00:19:28.702 15:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.702 15:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.702 15:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.702 15:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.702 15:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.702 15:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.702 15:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:28.702 15:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:28.960 15:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:28.960 15:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.960 15:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:28.960 15:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:28.960 15:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:28.960 15:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.960 15:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.960 15:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.960 15:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.960 15:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.960 15:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.960 15:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.895 00:19:29.895 15:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.895 15:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.895 15:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.895 15:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.895 15:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.895 15:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.895 15:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.153 15:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.153 15:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.153 { 00:19:30.153 "cntlid": 43, 00:19:30.153 "qid": 0, 00:19:30.153 "state": "enabled", 00:19:30.153 "thread": "nvmf_tgt_poll_group_000", 00:19:30.153 "listen_address": { 00:19:30.153 "trtype": "TCP", 00:19:30.153 "adrfam": "IPv4", 00:19:30.153 "traddr": "10.0.0.2", 00:19:30.153 "trsvcid": "4420" 00:19:30.153 }, 00:19:30.153 "peer_address": { 00:19:30.153 "trtype": "TCP", 00:19:30.153 "adrfam": "IPv4", 00:19:30.153 "traddr": "10.0.0.1", 00:19:30.153 "trsvcid": "39914" 00:19:30.153 }, 00:19:30.153 "auth": { 00:19:30.153 "state": "completed", 00:19:30.153 "digest": "sha256", 00:19:30.153 "dhgroup": "ffdhe8192" 00:19:30.153 } 00:19:30.153 } 00:19:30.153 ]' 00:19:30.153 15:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.153 15:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.153 15:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.153 15:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:30.153 15:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.153 15:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.153 15:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.153 15:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.410 15:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjY2MGJkYjA5ZDNlMDY0ZjZlODYxN2VlMDg1OWY2MzLRCdob: --dhchap-ctrl-secret DHHC-1:02:MjZjZTRkNWMwNzA4MWY5NjJkNmI1YmY1NGVlNGFjYTgxOTM4M2EyZmE3ODNiNGYxdMlbGA==: 00:19:31.345 15:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.345 15:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:31.345 15:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.345 15:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.345 15:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.345 15:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:19:31.345 15:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:31.345 15:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:31.602 15:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:31.603 15:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.603 15:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:31.603 15:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:31.603 15:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:31.603 15:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.603 15:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.603 15:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.603 15:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.603 15:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.603 15:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.603 15:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.536 00:19:32.536 15:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.536 15:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.536 15:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.793 15:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.793 15:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.793 15:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.793 15:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.793 15:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.793 15:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.793 { 00:19:32.793 "cntlid": 45, 00:19:32.793 "qid": 0, 00:19:32.793 "state": "enabled", 00:19:32.793 "thread": "nvmf_tgt_poll_group_000", 00:19:32.793 "listen_address": { 00:19:32.793 "trtype": "TCP", 00:19:32.793 "adrfam": "IPv4", 00:19:32.793 "traddr": "10.0.0.2", 00:19:32.793 "trsvcid": "4420" 
00:19:32.793 }, 00:19:32.793 "peer_address": { 00:19:32.793 "trtype": "TCP", 00:19:32.793 "adrfam": "IPv4", 00:19:32.793 "traddr": "10.0.0.1", 00:19:32.793 "trsvcid": "39946" 00:19:32.793 }, 00:19:32.793 "auth": { 00:19:32.793 "state": "completed", 00:19:32.793 "digest": "sha256", 00:19:32.793 "dhgroup": "ffdhe8192" 00:19:32.793 } 00:19:32.793 } 00:19:32.793 ]' 00:19:32.793 15:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.793 15:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.793 15:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.793 15:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:32.793 15:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.793 15:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.793 15:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.793 15:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.050 15:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDhjYTNjOTYwYzM0ZjA4OWQ2NWI2MTEwNDVkN2ZjY2UwYTVlNjE4ZWU0OTM1OGNifq4+GA==: --dhchap-ctrl-secret DHHC-1:01:MDk1N2QwYjQ4MGZkYzY4ODc5NWY3NWUzYThjZmFmOGMA7b0d: 00:19:34.421 15:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.421 15:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.421 15:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.421 15:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.421 15:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.421 15:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.421 15:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:34.421 15:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:34.421 15:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:34.421 15:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.421 15:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:34.421 15:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:34.421 15:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:34.421 15:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.421 15:31:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:34.421 15:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.421 15:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.421 15:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.421 15:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.421 15:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:35.354 00:19:35.354 15:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.354 15:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.354 15:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.613 15:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.613 15:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.613 15:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.613 15:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.613 15:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.613 15:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.613 { 00:19:35.613 "cntlid": 47, 00:19:35.613 "qid": 0, 00:19:35.613 "state": "enabled", 00:19:35.613 "thread": "nvmf_tgt_poll_group_000", 00:19:35.613 "listen_address": { 00:19:35.613 "trtype": "TCP", 00:19:35.613 "adrfam": "IPv4", 00:19:35.613 "traddr": "10.0.0.2", 00:19:35.613 "trsvcid": "4420" 00:19:35.613 }, 00:19:35.613 "peer_address": { 00:19:35.613 "trtype": "TCP", 00:19:35.613 "adrfam": "IPv4", 00:19:35.613 "traddr": "10.0.0.1", 00:19:35.613 "trsvcid": "39978" 00:19:35.613 }, 00:19:35.613 "auth": { 00:19:35.613 "state": "completed", 00:19:35.613 "digest": "sha256", 00:19:35.613 "dhgroup": "ffdhe8192" 00:19:35.613 } 00:19:35.613 } 00:19:35.613 ]' 00:19:35.613 15:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.613 15:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.613 15:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.613 15:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:35.613 15:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.613 15:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.613 15:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.613 
15:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.871 15:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OTUwM2RjNTUyYTBmYjQ5MDc5M2FmYzk3MjA5YmNmMmFhYzM0Y2I3Yjk4ZGEzOTdiNzlhNDk0MTUzMmQ2OTZlN0S9laE=: 00:19:36.802 15:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.060 15:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.060 15:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.060 15:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.060 15:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.060 15:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:37.060 15:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.060 15:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.060 15:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:37.060 15:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:37.323 15:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:37.323 15:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.323 15:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:37.323 15:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:37.323 15:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:37.323 15:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.323 15:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.323 15:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.323 15:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.323 15:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.323 15:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.323 15:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.580 00:19:37.580 15:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.580 15:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.580 15:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.838 15:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.838 15:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.838 15:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.838 15:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.838 15:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.838 15:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.838 { 00:19:37.838 "cntlid": 49, 00:19:37.838 "qid": 0, 00:19:37.838 "state": "enabled", 00:19:37.838 "thread": "nvmf_tgt_poll_group_000", 00:19:37.838 "listen_address": { 00:19:37.838 "trtype": "TCP", 00:19:37.838 "adrfam": "IPv4", 00:19:37.838 "traddr": "10.0.0.2", 00:19:37.838 "trsvcid": "4420" 00:19:37.838 }, 00:19:37.838 "peer_address": { 00:19:37.838 "trtype": "TCP", 00:19:37.838 "adrfam": "IPv4", 00:19:37.838 "traddr": "10.0.0.1", 00:19:37.838 "trsvcid": "53580" 00:19:37.838 }, 00:19:37.838 "auth": { 00:19:37.838 "state": "completed", 00:19:37.838 "digest": "sha384", 00:19:37.838 "dhgroup": "null" 00:19:37.838 } 00:19:37.838 } 00:19:37.838 ]' 00:19:37.838 15:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.838 15:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:37.838 15:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.838 15:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:37.838 15:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.838 15:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.838 15:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.838 15:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.095 15:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDU4ZjkxOGU0MzFiYzJiNzJmNDhmMDgwMmIyYmYxZTY2ZTQ2ODBkY2IzZjkyOWQ2FRlmSg==: --dhchap-ctrl-secret DHHC-1:03:Y2MyNWJjOWEwNjhlZTA0MDA0YWQ1ODBiNzg5OTFlMmE4MGU4OTAwYmZhYjFmY2Y1YmE0MzlmOGY0ODdjZjEyYoFFa1c=: 00:19:39.027 15:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.027 15:31:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.027 15:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.027 15:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.027 15:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.027 15:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.027 15:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:39.027 15:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:39.591 15:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:39.591 15:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.591 15:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:39.591 15:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:39.591 15:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:39.591 15:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.591 15:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.591 15:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.591 15:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.591 15:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.591 15:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.591 15:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.848 00:19:39.848 15:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.848 15:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.848 15:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.106 15:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.106 15:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.106 15:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.106 15:31:10 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:40.106 15:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.106 15:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.106 { 00:19:40.106 "cntlid": 51, 00:19:40.106 "qid": 0, 00:19:40.106 "state": "enabled", 00:19:40.106 "thread": "nvmf_tgt_poll_group_000", 00:19:40.106 "listen_address": { 00:19:40.106 "trtype": "TCP", 00:19:40.106 "adrfam": "IPv4", 00:19:40.106 "traddr": "10.0.0.2", 00:19:40.106 "trsvcid": "4420" 00:19:40.106 }, 00:19:40.106 "peer_address": { 00:19:40.106 "trtype": "TCP", 00:19:40.106 "adrfam": "IPv4", 00:19:40.106 "traddr": "10.0.0.1", 00:19:40.106 "trsvcid": "53614" 00:19:40.106 }, 00:19:40.106 "auth": { 00:19:40.106 "state": "completed", 00:19:40.106 "digest": "sha384", 00:19:40.106 "dhgroup": "null" 00:19:40.106 } 00:19:40.106 } 00:19:40.106 ]' 00:19:40.106 15:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.106 15:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:40.106 15:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.106 15:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:40.106 15:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.106 15:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.106 15:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.106 15:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.364 15:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjY2MGJkYjA5ZDNlMDY0ZjZlODYxN2VlMDg1OWY2MzLRCdob: --dhchap-ctrl-secret DHHC-1:02:MjZjZTRkNWMwNzA4MWY5NjJkNmI1YmY1NGVlNGFjYTgxOTM4M2EyZmE3ODNiNGYxdMlbGA==: 00:19:41.296 15:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.296 15:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.296 15:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.296 15:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.296 15:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.296 15:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.296 15:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:41.296 15:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:41.862 15:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:41.862 15:31:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.862 15:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:41.862 15:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:41.862 15:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:41.862 15:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.862 15:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.862 15:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.862 15:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.862 15:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.862 15:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.862 15:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.120 00:19:42.120 15:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.120 15:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.120 15:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.376 15:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.376 15:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.376 15:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.376 15:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.376 15:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.376 15:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.376 { 00:19:42.376 "cntlid": 53, 00:19:42.376 "qid": 0, 00:19:42.376 "state": "enabled", 00:19:42.376 "thread": "nvmf_tgt_poll_group_000", 00:19:42.376 "listen_address": { 00:19:42.376 "trtype": "TCP", 00:19:42.376 "adrfam": "IPv4", 00:19:42.376 "traddr": "10.0.0.2", 00:19:42.376 "trsvcid": "4420" 00:19:42.376 }, 00:19:42.376 "peer_address": { 00:19:42.376 "trtype": "TCP", 00:19:42.376 "adrfam": "IPv4", 00:19:42.376 "traddr": "10.0.0.1", 00:19:42.376 "trsvcid": "53642" 00:19:42.376 }, 00:19:42.376 "auth": { 00:19:42.376 "state": "completed", 00:19:42.376 "digest": "sha384", 00:19:42.376 "dhgroup": "null" 00:19:42.376 } 00:19:42.376 } 00:19:42.376 ]' 00:19:42.376 15:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.376 15:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:19:42.376 15:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.376 15:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:42.376 15:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.376 15:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.376 15:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.376 15:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.633 15:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDhjYTNjOTYwYzM0ZjA4OWQ2NWI2MTEwNDVkN2ZjY2UwYTVlNjE4ZWU0OTM1OGNifq4+GA==: --dhchap-ctrl-secret DHHC-1:01:MDk1N2QwYjQ4MGZkYzY4ODc5NWY3NWUzYThjZmFmOGMA7b0d: 00:19:43.568 15:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.568 15:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.568 15:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.568 15:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.568 15:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.568 15:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.568 15:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:43.568 15:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:43.825 15:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:43.825 15:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.825 15:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:43.825 15:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:43.825 15:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:43.825 15:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.825 15:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:43.825 15:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.825 15:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.825 15:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.825 15:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.825 15:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.390 00:19:44.390 15:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.390 15:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.390 15:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.390 15:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.390 15:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.390 15:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.390 15:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.390 15:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.390 15:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.390 { 00:19:44.390 "cntlid": 55, 00:19:44.390 "qid": 0, 00:19:44.390 "state": "enabled", 00:19:44.390 "thread": "nvmf_tgt_poll_group_000", 00:19:44.390 "listen_address": { 00:19:44.390 "trtype": "TCP", 00:19:44.390 "adrfam": "IPv4", 00:19:44.390 "traddr": "10.0.0.2", 00:19:44.390 "trsvcid": "4420" 00:19:44.390 }, 00:19:44.390 "peer_address": { 00:19:44.390 "trtype": "TCP", 00:19:44.390 "adrfam": "IPv4", 00:19:44.390 "traddr": "10.0.0.1", 00:19:44.390 "trsvcid": "53672" 00:19:44.390 }, 00:19:44.390 "auth": { 00:19:44.390 "state": "completed", 00:19:44.390 "digest": "sha384", 00:19:44.390 "dhgroup": "null" 00:19:44.390 } 00:19:44.390 } 00:19:44.390 ]' 00:19:44.390 15:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.647 15:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:44.647 15:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.647 15:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:44.647 15:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.647 15:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.647 15:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.647 15:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.903 15:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OTUwM2RjNTUyYTBmYjQ5MDc5M2FmYzk3MjA5YmNmMmFhYzM0Y2I3Yjk4ZGEzOTdiNzlhNDk0MTUzMmQ2OTZlN0S9laE=: 00:19:45.835 15:31:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.835 15:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.835 15:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.835 15:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.835 15:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.835 15:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:45.835 15:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.835 15:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:45.835 15:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:46.093 15:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:46.093 15:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.093 15:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:46.093 15:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:46.093 15:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:46.093 15:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.093 15:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.093 15:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.093 15:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.093 15:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.093 15:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.093 15:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.660 00:19:46.660 15:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.660 15:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.660 15:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.660 15:31:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.660 15:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.660 15:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.660 15:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.660 15:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.660 15:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.660 { 00:19:46.660 "cntlid": 57, 00:19:46.660 "qid": 0, 00:19:46.660 "state": "enabled", 00:19:46.660 "thread": "nvmf_tgt_poll_group_000", 00:19:46.660 "listen_address": { 00:19:46.660 "trtype": "TCP", 00:19:46.660 "adrfam": "IPv4", 00:19:46.660 "traddr": "10.0.0.2", 00:19:46.660 "trsvcid": "4420" 00:19:46.660 }, 00:19:46.660 "peer_address": { 00:19:46.660 "trtype": "TCP", 00:19:46.660 "adrfam": "IPv4", 00:19:46.660 "traddr": "10.0.0.1", 00:19:46.660 "trsvcid": "53710" 00:19:46.660 }, 00:19:46.660 "auth": { 00:19:46.660 "state": "completed", 00:19:46.660 "digest": "sha384", 00:19:46.660 "dhgroup": "ffdhe2048" 00:19:46.660 } 00:19:46.660 } 00:19:46.660 ]' 00:19:46.660 15:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.918 15:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:46.918 15:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.918 15:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:46.918 15:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.918 15:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.918 15:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.918 15:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.176 15:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDU4ZjkxOGU0MzFiYzJiNzJmNDhmMDgwMmIyYmYxZTY2ZTQ2ODBkY2IzZjkyOWQ2FRlmSg==: --dhchap-ctrl-secret DHHC-1:03:Y2MyNWJjOWEwNjhlZTA0MDA0YWQ1ODBiNzg5OTFlMmE4MGU4OTAwYmZhYjFmY2Y1YmE0MzlmOGY0ODdjZjEyYoFFa1c=: 00:19:48.109 15:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.109 15:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.109 15:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.109 15:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.109 15:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.109 15:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.109 15:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:48.109 15:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:48.367 15:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:48.367 15:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.367 15:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:48.367 15:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:48.367 15:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:48.367 15:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.367 15:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.367 15:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.367 15:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.367 15:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.367 15:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.367 15:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.623 00:19:48.623 15:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.623 15:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.623 15:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.880 15:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.880 15:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.880 15:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.880 15:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.881 15:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.881 15:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.881 { 00:19:48.881 "cntlid": 59, 00:19:48.881 "qid": 0, 00:19:48.881 "state": "enabled", 00:19:48.881 "thread": "nvmf_tgt_poll_group_000", 00:19:48.881 "listen_address": { 00:19:48.881 "trtype": "TCP", 00:19:48.881 "adrfam": "IPv4", 00:19:48.881 "traddr": "10.0.0.2", 00:19:48.881 "trsvcid": "4420" 00:19:48.881 }, 00:19:48.881 "peer_address": { 00:19:48.881 "trtype": "TCP", 00:19:48.881 "adrfam": "IPv4", 00:19:48.881 
"traddr": "10.0.0.1", 00:19:48.881 "trsvcid": "51278" 00:19:48.881 }, 00:19:48.881 "auth": { 00:19:48.881 "state": "completed", 00:19:48.881 "digest": "sha384", 00:19:48.881 "dhgroup": "ffdhe2048" 00:19:48.881 } 00:19:48.881 } 00:19:48.881 ]' 00:19:48.881 15:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.881 15:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:48.881 15:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.138 15:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:49.138 15:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.138 15:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.138 15:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.138 15:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.396 15:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjY2MGJkYjA5ZDNlMDY0ZjZlODYxN2VlMDg1OWY2MzLRCdob: --dhchap-ctrl-secret DHHC-1:02:MjZjZTRkNWMwNzA4MWY5NjJkNmI1YmY1NGVlNGFjYTgxOTM4M2EyZmE3ODNiNGYxdMlbGA==: 00:19:50.324 15:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.324 15:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.324 15:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.324 15:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.324 15:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.324 15:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.324 15:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:50.324 15:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:50.581 15:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:50.581 15:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.581 15:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:50.581 15:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:50.581 15:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:50.581 15:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.581 15:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.581 15:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.581 15:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.581 15:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.581 15:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.581 15:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.876 00:19:51.149 15:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.149 15:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.149 15:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.149 15:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.149 15:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.149 15:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.149 15:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.406 15:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.406 15:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.406 { 00:19:51.406 "cntlid": 61, 00:19:51.406 "qid": 0, 00:19:51.406 "state": "enabled", 00:19:51.406 "thread": "nvmf_tgt_poll_group_000", 00:19:51.406 "listen_address": { 00:19:51.406 "trtype": "TCP", 00:19:51.406 "adrfam": "IPv4", 00:19:51.406 "traddr": "10.0.0.2", 00:19:51.406 "trsvcid": "4420" 00:19:51.406 }, 00:19:51.406 "peer_address": { 00:19:51.406 "trtype": "TCP", 00:19:51.406 "adrfam": "IPv4", 00:19:51.406 "traddr": "10.0.0.1", 00:19:51.406 "trsvcid": "51304" 00:19:51.406 }, 00:19:51.406 "auth": { 00:19:51.406 "state": "completed", 00:19:51.406 "digest": "sha384", 00:19:51.406 "dhgroup": "ffdhe2048" 00:19:51.406 } 00:19:51.406 } 00:19:51.406 ]' 00:19:51.406 15:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.406 15:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:51.406 15:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.406 15:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:51.406 15:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.406 15:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.406 15:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.406 15:31:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.664 15:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDhjYTNjOTYwYzM0ZjA4OWQ2NWI2MTEwNDVkN2ZjY2UwYTVlNjE4ZWU0OTM1OGNifq4+GA==: --dhchap-ctrl-secret DHHC-1:01:MDk1N2QwYjQ4MGZkYzY4ODc5NWY3NWUzYThjZmFmOGMA7b0d: 00:19:52.598 15:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.598 15:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.598 15:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.598 15:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.598 15:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.598 15:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.598 15:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:52.598 15:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:52.855 15:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:52.855 15:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.855 15:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:52.855 15:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:52.855 15:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:52.855 15:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.855 15:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:52.855 15:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.855 15:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.855 15:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.855 15:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.855 15:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.420 00:19:53.420 15:31:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.420 15:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.420 15:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.678 15:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.678 15:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.678 15:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.678 15:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.678 15:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.678 15:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.678 { 00:19:53.678 "cntlid": 63, 00:19:53.678 "qid": 0, 00:19:53.678 "state": "enabled", 00:19:53.678 "thread": "nvmf_tgt_poll_group_000", 00:19:53.678 "listen_address": { 00:19:53.678 "trtype": "TCP", 00:19:53.678 "adrfam": "IPv4", 00:19:53.678 "traddr": "10.0.0.2", 00:19:53.678 "trsvcid": "4420" 00:19:53.678 }, 00:19:53.678 "peer_address": { 00:19:53.678 "trtype": "TCP", 00:19:53.678 "adrfam": "IPv4", 00:19:53.678 "traddr": "10.0.0.1", 00:19:53.678 "trsvcid": "51328" 00:19:53.678 }, 00:19:53.678 "auth": { 00:19:53.678 "state": "completed", 00:19:53.678 "digest": "sha384", 00:19:53.678 "dhgroup": "ffdhe2048" 00:19:53.678 } 00:19:53.678 } 00:19:53.678 ]' 00:19:53.678 15:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.678 15:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:53.678 15:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.678 15:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:53.678 15:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.678 15:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.678 15:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.678 15:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.936 15:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OTUwM2RjNTUyYTBmYjQ5MDc5M2FmYzk3MjA5YmNmMmFhYzM0Y2I3Yjk4ZGEzOTdiNzlhNDk0MTUzMmQ2OTZlN0S9laE=: 00:19:54.870 15:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.870 15:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.870 15:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.870 15:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:54.870 15:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.870 15:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.870 15:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.870 15:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:54.870 15:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:55.128 15:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:55.128 15:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.128 15:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:55.128 15:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:55.128 15:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:55.128 15:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.128 15:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.128 15:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.128 15:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.128 15:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.128 15:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.128 15:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.385 00:19:55.385 15:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.385 15:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.385 15:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.643 15:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.643 15:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.643 15:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.643 15:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.643 15:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.643 15:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.643 { 
00:19:55.643 "cntlid": 65, 00:19:55.643 "qid": 0, 00:19:55.643 "state": "enabled", 00:19:55.643 "thread": "nvmf_tgt_poll_group_000", 00:19:55.643 "listen_address": { 00:19:55.643 "trtype": "TCP", 00:19:55.643 "adrfam": "IPv4", 00:19:55.643 "traddr": "10.0.0.2", 00:19:55.643 "trsvcid": "4420" 00:19:55.643 }, 00:19:55.643 "peer_address": { 00:19:55.643 "trtype": "TCP", 00:19:55.643 "adrfam": "IPv4", 00:19:55.643 "traddr": "10.0.0.1", 00:19:55.643 "trsvcid": "51338" 00:19:55.643 }, 00:19:55.643 "auth": { 00:19:55.643 "state": "completed", 00:19:55.643 "digest": "sha384", 00:19:55.643 "dhgroup": "ffdhe3072" 00:19:55.643 } 00:19:55.643 } 00:19:55.643 ]' 00:19:55.643 15:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.901 15:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:55.901 15:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.901 15:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:55.901 15:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.901 15:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.901 15:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.901 15:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.158 15:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDU4ZjkxOGU0MzFiYzJiNzJmNDhmMDgwMmIyYmYxZTY2ZTQ2ODBkY2IzZjkyOWQ2FRlmSg==: --dhchap-ctrl-secret DHHC-1:03:Y2MyNWJjOWEwNjhlZTA0MDA0YWQ1ODBiNzg5OTFlMmE4MGU4OTAwYmZhYjFmY2Y1YmE0MzlmOGY0ODdjZjEyYoFFa1c=: 00:19:57.090 15:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.090 15:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.090 15:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.090 15:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.090 15:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.090 15:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.090 15:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:57.090 15:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:57.348 15:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:57.348 15:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.348 15:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:19:57.348 15:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:57.348 15:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:57.348 15:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.348 15:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.348 15:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.348 15:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.348 15:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.348 15:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.348 15:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.605 00:19:57.605 15:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.605 15:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.605 15:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.862 15:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.862 15:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.862 15:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.862 15:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.862 15:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.862 15:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.862 { 00:19:57.862 "cntlid": 67, 00:19:57.862 "qid": 0, 00:19:57.862 "state": "enabled", 00:19:57.862 "thread": "nvmf_tgt_poll_group_000", 00:19:57.862 "listen_address": { 00:19:57.862 "trtype": "TCP", 00:19:57.862 "adrfam": "IPv4", 00:19:57.862 "traddr": "10.0.0.2", 00:19:57.862 "trsvcid": "4420" 00:19:57.862 }, 00:19:57.862 "peer_address": { 00:19:57.862 "trtype": "TCP", 00:19:57.862 "adrfam": "IPv4", 00:19:57.862 "traddr": "10.0.0.1", 00:19:57.862 "trsvcid": "43742" 00:19:57.862 }, 00:19:57.862 "auth": { 00:19:57.862 "state": "completed", 00:19:57.862 "digest": "sha384", 00:19:57.862 "dhgroup": "ffdhe3072" 00:19:57.862 } 00:19:57.862 } 00:19:57.862 ]' 00:19:57.862 15:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.120 15:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.120 15:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.120 15:31:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:58.120 15:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.120 15:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.120 15:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.120 15:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.376 15:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjY2MGJkYjA5ZDNlMDY0ZjZlODYxN2VlMDg1OWY2MzLRCdob: --dhchap-ctrl-secret DHHC-1:02:MjZjZTRkNWMwNzA4MWY5NjJkNmI1YmY1NGVlNGFjYTgxOTM4M2EyZmE3ODNiNGYxdMlbGA==: 00:19:59.310 15:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.310 15:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.310 15:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.310 15:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.310 15:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.310 15:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.310 15:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:59.310 15:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:59.568 15:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:59.569 15:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.569 15:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:59.569 15:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:59.569 15:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:59.569 15:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.569 15:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.569 15:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.569 15:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.569 15:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.569 15:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.569 15:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.134 00:20:00.134 15:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.134 15:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.134 15:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.391 15:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.391 15:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.391 15:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.391 15:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.391 15:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.391 15:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.391 { 00:20:00.391 "cntlid": 69, 00:20:00.391 "qid": 0, 00:20:00.391 "state": "enabled", 00:20:00.391 "thread": "nvmf_tgt_poll_group_000", 00:20:00.391 "listen_address": { 00:20:00.391 "trtype": "TCP", 00:20:00.391 "adrfam": "IPv4", 00:20:00.391 "traddr": "10.0.0.2", 00:20:00.391 "trsvcid": "4420" 00:20:00.391 }, 00:20:00.391 "peer_address": { 00:20:00.391 "trtype": "TCP", 00:20:00.391 "adrfam": "IPv4", 00:20:00.391 "traddr": "10.0.0.1", 00:20:00.391 "trsvcid": "43770" 00:20:00.391 }, 00:20:00.391 "auth": { 00:20:00.391 "state": "completed", 00:20:00.391 "digest": "sha384", 00:20:00.391 "dhgroup": "ffdhe3072" 00:20:00.391 } 00:20:00.391 } 00:20:00.391 ]' 00:20:00.391 15:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.391 15:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.391 15:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.391 15:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:00.391 15:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.391 15:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.391 15:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.391 15:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.647 15:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDhjYTNjOTYwYzM0ZjA4OWQ2NWI2MTEwNDVkN2ZjY2UwYTVlNjE4ZWU0OTM1OGNifq4+GA==: --dhchap-ctrl-secret 
DHHC-1:01:MDk1N2QwYjQ4MGZkYzY4ODc5NWY3NWUzYThjZmFmOGMA7b0d: 00:20:01.579 15:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.579 15:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.579 15:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.579 15:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.579 15:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.579 15:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.579 15:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:01.579 15:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:02.144 15:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:02.144 15:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.144 15:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:02.144 15:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:02.144 15:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:02.144 15:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.144 15:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:02.144 15:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.144 15:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.144 15:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.144 15:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.144 15:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.401 00:20:02.401 15:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.401 15:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.401 15:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.659 15:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.659 15:31:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.659 15:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.659 15:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.659 15:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.659 15:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.659 { 00:20:02.659 "cntlid": 71, 00:20:02.659 "qid": 0, 00:20:02.659 "state": "enabled", 00:20:02.659 "thread": "nvmf_tgt_poll_group_000", 00:20:02.659 "listen_address": { 00:20:02.659 "trtype": "TCP", 00:20:02.659 "adrfam": "IPv4", 00:20:02.659 "traddr": "10.0.0.2", 00:20:02.659 "trsvcid": "4420" 00:20:02.659 }, 00:20:02.659 "peer_address": { 00:20:02.659 "trtype": "TCP", 00:20:02.659 "adrfam": "IPv4", 00:20:02.659 "traddr": "10.0.0.1", 00:20:02.659 "trsvcid": "43802" 00:20:02.659 }, 00:20:02.659 "auth": { 00:20:02.659 "state": "completed", 00:20:02.659 "digest": "sha384", 00:20:02.659 "dhgroup": "ffdhe3072" 00:20:02.659 } 00:20:02.659 } 00:20:02.659 ]' 00:20:02.659 15:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.659 15:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.659 15:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.659 15:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:02.659 15:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.659 15:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.659 15:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.659 15:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.916 15:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OTUwM2RjNTUyYTBmYjQ5MDc5M2FmYzk3MjA5YmNmMmFhYzM0Y2I3Yjk4ZGEzOTdiNzlhNDk0MTUzMmQ2OTZlN0S9laE=: 00:20:03.847 15:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.847 15:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.847 15:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.847 15:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.104 15:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.104 15:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.104 15:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.104 15:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:04.104 15:31:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:04.362 15:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:04.362 15:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.362 15:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:04.362 15:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:04.362 15:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:04.362 15:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.362 15:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.362 15:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.362 15:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.362 15:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.362 15:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.362 15:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.655 00:20:04.655 15:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.655 15:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.655 15:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.934 15:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.934 15:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.934 15:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.934 15:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.934 15:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.934 15:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.934 { 00:20:04.934 "cntlid": 73, 00:20:04.934 "qid": 0, 00:20:04.934 "state": "enabled", 00:20:04.934 "thread": "nvmf_tgt_poll_group_000", 00:20:04.934 "listen_address": { 00:20:04.934 "trtype": "TCP", 00:20:04.934 "adrfam": "IPv4", 00:20:04.934 "traddr": "10.0.0.2", 00:20:04.934 "trsvcid": "4420" 00:20:04.934 }, 00:20:04.934 "peer_address": { 00:20:04.934 "trtype": "TCP", 00:20:04.934 "adrfam": "IPv4", 00:20:04.934 "traddr": "10.0.0.1", 00:20:04.934 "trsvcid": "43826" 00:20:04.934 }, 00:20:04.934 "auth": { 00:20:04.934 
"state": "completed", 00:20:04.934 "digest": "sha384", 00:20:04.934 "dhgroup": "ffdhe4096" 00:20:04.934 } 00:20:04.934 } 00:20:04.934 ]' 00:20:04.934 15:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.934 15:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.934 15:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.934 15:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:04.934 15:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.191 15:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.191 15:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.191 15:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.448 15:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDU4ZjkxOGU0MzFiYzJiNzJmNDhmMDgwMmIyYmYxZTY2ZTQ2ODBkY2IzZjkyOWQ2FRlmSg==: --dhchap-ctrl-secret DHHC-1:03:Y2MyNWJjOWEwNjhlZTA0MDA0YWQ1ODBiNzg5OTFlMmE4MGU4OTAwYmZhYjFmY2Y1YmE0MzlmOGY0ODdjZjEyYoFFa1c=: 00:20:06.379 15:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.379 15:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.379 15:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.379 15:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.379 15:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.379 15:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.379 15:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:06.379 15:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:06.636 15:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:06.636 15:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.637 15:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:06.637 15:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:06.637 15:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:06.637 15:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.637 15:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.637 15:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.637 15:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.637 15:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.637 15:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.637 15:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.894 00:20:07.152 15:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.152 15:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.152 15:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.152 15:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.152 15:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.152 15:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.152 15:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.409 15:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.409 15:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.409 { 00:20:07.409 "cntlid": 75, 00:20:07.409 "qid": 0, 00:20:07.409 "state": "enabled", 00:20:07.409 "thread": "nvmf_tgt_poll_group_000", 00:20:07.409 "listen_address": { 00:20:07.409 "trtype": "TCP", 00:20:07.409 "adrfam": "IPv4", 00:20:07.409 "traddr": "10.0.0.2", 00:20:07.409 "trsvcid": "4420" 00:20:07.409 }, 00:20:07.409 "peer_address": { 00:20:07.409 "trtype": "TCP", 00:20:07.409 "adrfam": "IPv4", 00:20:07.409 "traddr": "10.0.0.1", 00:20:07.409 "trsvcid": "43852" 00:20:07.409 }, 00:20:07.409 "auth": { 00:20:07.409 "state": "completed", 00:20:07.409 "digest": "sha384", 00:20:07.409 "dhgroup": "ffdhe4096" 00:20:07.410 } 00:20:07.410 } 00:20:07.410 ]' 00:20:07.410 15:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.410 15:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.410 15:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.410 15:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:07.410 15:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.410 15:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.410 15:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.410 15:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.667 15:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjY2MGJkYjA5ZDNlMDY0ZjZlODYxN2VlMDg1OWY2MzLRCdob: --dhchap-ctrl-secret DHHC-1:02:MjZjZTRkNWMwNzA4MWY5NjJkNmI1YmY1NGVlNGFjYTgxOTM4M2EyZmE3ODNiNGYxdMlbGA==: 00:20:08.599 15:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.599 15:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.599 15:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.599 15:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.599 15:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.599 15:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.599 15:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:08.599 15:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:08.857 15:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:08.857 15:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.857 15:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:08.857 15:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:08.857 15:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:08.857 15:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.857 15:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.857 15:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.857 15:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.857 15:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.857 15:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.857 15:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:09.422 00:20:09.422 15:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.422 15:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.422 15:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.679 15:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.679 15:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.679 15:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.679 15:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.679 15:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.679 15:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.679 { 00:20:09.679 "cntlid": 77, 00:20:09.679 "qid": 0, 00:20:09.679 "state": "enabled", 00:20:09.679 "thread": "nvmf_tgt_poll_group_000", 00:20:09.679 "listen_address": { 00:20:09.680 "trtype": "TCP", 00:20:09.680 "adrfam": "IPv4", 00:20:09.680 "traddr": "10.0.0.2", 00:20:09.680 "trsvcid": "4420" 00:20:09.680 }, 00:20:09.680 "peer_address": { 00:20:09.680 "trtype": "TCP", 00:20:09.680 "adrfam": "IPv4", 00:20:09.680 "traddr": "10.0.0.1", 00:20:09.680 "trsvcid": "45052" 00:20:09.680 }, 00:20:09.680 "auth": { 00:20:09.680 "state": "completed", 00:20:09.680 "digest": "sha384", 00:20:09.680 "dhgroup": "ffdhe4096" 00:20:09.680 } 00:20:09.680 } 00:20:09.680 ]' 00:20:09.680 15:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.680 15:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.680 15:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.680 15:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:09.680 15:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.680 15:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.680 15:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.680 15:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.937 15:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDhjYTNjOTYwYzM0ZjA4OWQ2NWI2MTEwNDVkN2ZjY2UwYTVlNjE4ZWU0OTM1OGNifq4+GA==: --dhchap-ctrl-secret DHHC-1:01:MDk1N2QwYjQ4MGZkYzY4ODc5NWY3NWUzYThjZmFmOGMA7b0d: 00:20:10.869 15:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.869 15:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.869 15:31:41 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.869 15:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.869 15:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.869 15:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.869 15:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:10.869 15:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:11.126 15:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:11.126 15:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.126 15:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:11.126 15:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:11.126 15:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:11.126 15:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.127 15:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:11.127 15:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.127 15:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.127 15:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.127 15:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:11.127 15:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:11.691 00:20:11.691 15:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.691 15:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.692 15:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.692 15:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.692 15:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.692 15:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.692 15:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.949 15:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.949 15:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.949 { 00:20:11.949 "cntlid": 79, 00:20:11.949 "qid": 
0, 00:20:11.949 "state": "enabled", 00:20:11.949 "thread": "nvmf_tgt_poll_group_000", 00:20:11.949 "listen_address": { 00:20:11.949 "trtype": "TCP", 00:20:11.949 "adrfam": "IPv4", 00:20:11.949 "traddr": "10.0.0.2", 00:20:11.949 "trsvcid": "4420" 00:20:11.949 }, 00:20:11.949 "peer_address": { 00:20:11.949 "trtype": "TCP", 00:20:11.949 "adrfam": "IPv4", 00:20:11.949 "traddr": "10.0.0.1", 00:20:11.949 "trsvcid": "45082" 00:20:11.949 }, 00:20:11.949 "auth": { 00:20:11.949 "state": "completed", 00:20:11.949 "digest": "sha384", 00:20:11.949 "dhgroup": "ffdhe4096" 00:20:11.949 } 00:20:11.949 } 00:20:11.949 ]' 00:20:11.949 15:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.949 15:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.949 15:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.949 15:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:11.949 15:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.949 15:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.949 15:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.949 15:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.218 15:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OTUwM2RjNTUyYTBmYjQ5MDc5M2FmYzk3MjA5YmNmMmFhYzM0Y2I3Yjk4ZGEzOTdiNzlhNDk0MTUzMmQ2OTZlN0S9laE=: 00:20:13.162 15:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.162 15:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:13.162 15:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.162 15:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.162 15:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.162 15:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.162 15:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.162 15:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:13.162 15:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:13.420 15:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:13.420 15:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.420 15:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:13.420 15:31:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:13.420 15:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:13.420 15:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.420 15:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.420 15:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.420 15:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.420 15:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.420 15:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.420 15:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.984 00:20:13.984 15:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.984 15:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.984 15:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.243 15:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.243 15:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.243 15:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.243 15:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.243 15:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.243 15:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.243 { 00:20:14.243 "cntlid": 81, 00:20:14.243 "qid": 0, 00:20:14.243 "state": "enabled", 00:20:14.243 "thread": "nvmf_tgt_poll_group_000", 00:20:14.243 "listen_address": { 00:20:14.243 "trtype": "TCP", 00:20:14.243 "adrfam": "IPv4", 00:20:14.243 "traddr": "10.0.0.2", 00:20:14.243 "trsvcid": "4420" 00:20:14.243 }, 00:20:14.243 "peer_address": { 00:20:14.243 "trtype": "TCP", 00:20:14.243 "adrfam": "IPv4", 00:20:14.243 "traddr": "10.0.0.1", 00:20:14.243 "trsvcid": "45108" 00:20:14.243 }, 00:20:14.243 "auth": { 00:20:14.243 "state": "completed", 00:20:14.243 "digest": "sha384", 00:20:14.243 "dhgroup": "ffdhe6144" 00:20:14.243 } 00:20:14.243 } 00:20:14.243 ]' 00:20:14.243 15:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.243 15:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.243 15:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.243 15:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:14.243 15:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.243 15:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.243 15:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.243 15:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.502 15:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDU4ZjkxOGU0MzFiYzJiNzJmNDhmMDgwMmIyYmYxZTY2ZTQ2ODBkY2IzZjkyOWQ2FRlmSg==: --dhchap-ctrl-secret DHHC-1:03:Y2MyNWJjOWEwNjhlZTA0MDA0YWQ1ODBiNzg5OTFlMmE4MGU4OTAwYmZhYjFmY2Y1YmE0MzlmOGY0ODdjZjEyYoFFa1c=: 00:20:15.874 15:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.874 15:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.874 15:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.874 15:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.874 15:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.874 15:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.874 15:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:15.874 15:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:15.874 15:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:15.874 15:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.874 15:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:15.874 15:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:15.874 15:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:15.874 15:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.874 15:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.874 15:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.874 15:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.874 15:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.874 15:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.874 15:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.439 00:20:16.439 15:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.439 15:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.439 15:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.697 15:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.697 15:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.697 15:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.697 15:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.697 15:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.697 15:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.697 { 00:20:16.697 "cntlid": 83, 00:20:16.697 "qid": 0, 00:20:16.697 "state": "enabled", 00:20:16.697 "thread": "nvmf_tgt_poll_group_000", 00:20:16.697 "listen_address": { 00:20:16.697 "trtype": "TCP", 00:20:16.697 "adrfam": "IPv4", 00:20:16.697 "traddr": "10.0.0.2", 00:20:16.697 "trsvcid": "4420" 00:20:16.697 }, 00:20:16.697 "peer_address": { 00:20:16.697 "trtype": "TCP", 00:20:16.697 "adrfam": "IPv4", 00:20:16.697 "traddr": "10.0.0.1", 00:20:16.697 "trsvcid": "45144" 00:20:16.697 }, 00:20:16.697 "auth": { 00:20:16.697 "state": "completed", 00:20:16.697 "digest": "sha384", 00:20:16.697 "dhgroup": "ffdhe6144" 00:20:16.697 } 00:20:16.697 } 00:20:16.697 ]' 00:20:16.697 15:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.697 15:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.697 15:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.697 15:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:16.697 15:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.697 15:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.697 15:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.697 15:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.955 15:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjY2MGJkYjA5ZDNlMDY0ZjZlODYxN2VlMDg1OWY2MzLRCdob: --dhchap-ctrl-secret 
DHHC-1:02:MjZjZTRkNWMwNzA4MWY5NjJkNmI1YmY1NGVlNGFjYTgxOTM4M2EyZmE3ODNiNGYxdMlbGA==: 00:20:18.329 15:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.329 15:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.329 15:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.329 15:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.329 15:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.329 15:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.329 15:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:18.329 15:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:18.329 15:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:18.329 15:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.329 15:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:18.329 15:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:18.329 15:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:18.329 15:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.329 15:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.329 15:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.329 15:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.329 15:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.329 15:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.329 15:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.912 00:20:18.912 15:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.912 15:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.912 15:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.170 15:31:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.170 15:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.170 15:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.170 15:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.170 15:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.170 15:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.170 { 00:20:19.170 "cntlid": 85, 00:20:19.170 "qid": 0, 00:20:19.170 "state": "enabled", 00:20:19.170 "thread": "nvmf_tgt_poll_group_000", 00:20:19.170 "listen_address": { 00:20:19.170 "trtype": "TCP", 00:20:19.170 "adrfam": "IPv4", 00:20:19.170 "traddr": "10.0.0.2", 00:20:19.170 "trsvcid": "4420" 00:20:19.170 }, 00:20:19.170 "peer_address": { 00:20:19.170 "trtype": "TCP", 00:20:19.170 "adrfam": "IPv4", 00:20:19.170 "traddr": "10.0.0.1", 00:20:19.170 "trsvcid": "57360" 00:20:19.170 }, 00:20:19.170 "auth": { 00:20:19.170 "state": "completed", 00:20:19.170 "digest": "sha384", 00:20:19.170 "dhgroup": "ffdhe6144" 00:20:19.170 } 00:20:19.170 } 00:20:19.170 ]' 00:20:19.170 15:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.170 15:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.170 15:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.427 15:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:19.427 15:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.427 15:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.427 15:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.427 15:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.685 15:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDhjYTNjOTYwYzM0ZjA4OWQ2NWI2MTEwNDVkN2ZjY2UwYTVlNjE4ZWU0OTM1OGNifq4+GA==: --dhchap-ctrl-secret DHHC-1:01:MDk1N2QwYjQ4MGZkYzY4ODc5NWY3NWUzYThjZmFmOGMA7b0d: 00:20:20.615 15:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.615 15:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.615 15:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.615 15:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.615 15:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.615 15:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.615 15:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
00:20:20.615 15:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:20.873 15:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:20.873 15:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.873 15:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:20.873 15:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:20.873 15:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:20.873 15:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.873 15:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:20.873 15:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.873 15:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.873 15:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.873 15:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:20.873 15:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:21.438 00:20:21.438 15:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.438 15:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.438 15:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.695 15:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.695 15:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.695 15:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.695 15:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.695 15:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.695 15:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.695 { 00:20:21.695 "cntlid": 87, 00:20:21.695 "qid": 0, 00:20:21.695 "state": "enabled", 00:20:21.695 "thread": "nvmf_tgt_poll_group_000", 00:20:21.695 "listen_address": { 00:20:21.695 "trtype": "TCP", 00:20:21.695 "adrfam": "IPv4", 00:20:21.695 "traddr": "10.0.0.2", 00:20:21.695 "trsvcid": "4420" 00:20:21.695 }, 00:20:21.695 "peer_address": { 00:20:21.695 "trtype": "TCP", 00:20:21.695 "adrfam": "IPv4", 00:20:21.695 "traddr": "10.0.0.1", 00:20:21.695 "trsvcid": "57388" 00:20:21.695 }, 00:20:21.695 "auth": { 00:20:21.695 "state": "completed", 
00:20:21.695 "digest": "sha384", 00:20:21.695 "dhgroup": "ffdhe6144" 00:20:21.695 } 00:20:21.695 } 00:20:21.695 ]' 00:20:21.695 15:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.695 15:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.695 15:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.695 15:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:21.695 15:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.695 15:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.695 15:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.696 15:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.954 15:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OTUwM2RjNTUyYTBmYjQ5MDc5M2FmYzk3MjA5YmNmMmFhYzM0Y2I3Yjk4ZGEzOTdiNzlhNDk0MTUzMmQ2OTZlN0S9laE=: 00:20:22.886 15:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.886 15:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.886 15:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.886 15:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.886 15:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.886 15:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.886 15:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.886 15:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:22.886 15:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:23.143 15:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:23.143 15:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.143 15:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:23.143 15:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:23.143 15:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:23.143 15:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.143 15:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:23.143 15:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.143 15:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.143 15:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.143 15:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.143 15:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.073 00:20:24.073 15:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.073 15:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.073 15:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.331 15:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.331 15:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.331 15:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.331 15:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.331 15:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.331 15:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.331 { 00:20:24.331 "cntlid": 89, 00:20:24.331 "qid": 0, 00:20:24.331 "state": "enabled", 00:20:24.331 "thread": "nvmf_tgt_poll_group_000", 00:20:24.331 "listen_address": { 00:20:24.331 "trtype": "TCP", 00:20:24.331 "adrfam": "IPv4", 00:20:24.331 "traddr": "10.0.0.2", 00:20:24.331 "trsvcid": "4420" 00:20:24.331 }, 00:20:24.331 "peer_address": { 00:20:24.331 "trtype": "TCP", 00:20:24.331 "adrfam": "IPv4", 00:20:24.331 "traddr": "10.0.0.1", 00:20:24.331 "trsvcid": "57414" 00:20:24.331 }, 00:20:24.331 "auth": { 00:20:24.331 "state": "completed", 00:20:24.331 "digest": "sha384", 00:20:24.331 "dhgroup": "ffdhe8192" 00:20:24.331 } 00:20:24.331 } 00:20:24.331 ]' 00:20:24.331 15:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.331 15:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.331 15:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.589 15:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:24.589 15:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.589 15:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.589 15:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.589 15:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.846 15:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDU4ZjkxOGU0MzFiYzJiNzJmNDhmMDgwMmIyYmYxZTY2ZTQ2ODBkY2IzZjkyOWQ2FRlmSg==: --dhchap-ctrl-secret DHHC-1:03:Y2MyNWJjOWEwNjhlZTA0MDA0YWQ1ODBiNzg5OTFlMmE4MGU4OTAwYmZhYjFmY2Y1YmE0MzlmOGY0ODdjZjEyYoFFa1c=: 00:20:25.777 15:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.777 15:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.777 15:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.777 15:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.777 15:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.777 15:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.777 15:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:25.777 15:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:26.034 15:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:26.034 15:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.034 15:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:26.034 15:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:26.034 15:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:26.034 15:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.034 15:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.034 15:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.034 15:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.034 15:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.034 15:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.034 15:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:20:26.967 00:20:26.967 15:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.967 15:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:26.967 15:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.246 15:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.246 15:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.246 15:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.246 15:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.246 15:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.246 15:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.246 { 00:20:27.246 "cntlid": 91, 00:20:27.246 "qid": 0, 00:20:27.246 "state": "enabled", 00:20:27.246 "thread": "nvmf_tgt_poll_group_000", 00:20:27.246 "listen_address": { 00:20:27.246 "trtype": "TCP", 00:20:27.246 "adrfam": "IPv4", 00:20:27.246 "traddr": "10.0.0.2", 00:20:27.246 "trsvcid": "4420" 00:20:27.247 }, 00:20:27.247 "peer_address": { 00:20:27.247 "trtype": "TCP", 00:20:27.247 "adrfam": "IPv4", 00:20:27.247 "traddr": "10.0.0.1", 00:20:27.247 "trsvcid": "57438" 00:20:27.247 }, 00:20:27.247 "auth": { 00:20:27.247 "state": "completed", 00:20:27.247 "digest": "sha384", 00:20:27.247 "dhgroup": "ffdhe8192" 00:20:27.247 } 00:20:27.247 } 00:20:27.247 ]' 00:20:27.247 15:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.247 15:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.247 15:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.247 15:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:27.247 15:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.247 15:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.247 15:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.247 15:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.505 15:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjY2MGJkYjA5ZDNlMDY0ZjZlODYxN2VlMDg1OWY2MzLRCdob: --dhchap-ctrl-secret DHHC-1:02:MjZjZTRkNWMwNzA4MWY5NjJkNmI1YmY1NGVlNGFjYTgxOTM4M2EyZmE3ODNiNGYxdMlbGA==: 00:20:28.435 15:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.435 15:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.435 15:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:28.435 15:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.435 15:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.435 15:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.435 15:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:28.435 15:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:28.692 15:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:28.692 15:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.692 15:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:28.692 15:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:28.692 15:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:28.692 15:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.692 15:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.692 15:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.692 15:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.692 15:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.692 15:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.692 15:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.624 00:20:29.624 15:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.624 15:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.624 15:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.882 15:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.882 15:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.882 15:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.882 15:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.882 15:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.882 15:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.882 { 
00:20:29.882 "cntlid": 93, 00:20:29.882 "qid": 0, 00:20:29.882 "state": "enabled", 00:20:29.882 "thread": "nvmf_tgt_poll_group_000", 00:20:29.882 "listen_address": { 00:20:29.882 "trtype": "TCP", 00:20:29.882 "adrfam": "IPv4", 00:20:29.882 "traddr": "10.0.0.2", 00:20:29.882 "trsvcid": "4420" 00:20:29.882 }, 00:20:29.882 "peer_address": { 00:20:29.882 "trtype": "TCP", 00:20:29.882 "adrfam": "IPv4", 00:20:29.882 "traddr": "10.0.0.1", 00:20:29.882 "trsvcid": "55754" 00:20:29.882 }, 00:20:29.882 "auth": { 00:20:29.882 "state": "completed", 00:20:29.882 "digest": "sha384", 00:20:29.882 "dhgroup": "ffdhe8192" 00:20:29.882 } 00:20:29.882 } 00:20:29.882 ]' 00:20:29.882 15:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.882 15:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.882 15:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.138 15:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:30.138 15:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.138 15:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.138 15:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.138 15:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.395 15:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDhjYTNjOTYwYzM0ZjA4OWQ2NWI2MTEwNDVkN2ZjY2UwYTVlNjE4ZWU0OTM1OGNifq4+GA==: --dhchap-ctrl-secret DHHC-1:01:MDk1N2QwYjQ4MGZkYzY4ODc5NWY3NWUzYThjZmFmOGMA7b0d: 00:20:31.326 15:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.327 15:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.327 15:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.327 15:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.327 15:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.327 15:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.327 15:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:31.327 15:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:31.584 15:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:31.584 15:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.584 15:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:31.584 15:32:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:31.584 15:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:31.584 15:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.584 15:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:31.584 15:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.584 15:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.584 15:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.584 15:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:31.584 15:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:32.516 00:20:32.516 15:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.516 15:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.516 15:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.773 15:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.773 15:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.773 15:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.773 15:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.773 15:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.773 15:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.773 { 00:20:32.773 "cntlid": 95, 00:20:32.773 "qid": 0, 00:20:32.773 "state": "enabled", 00:20:32.773 "thread": "nvmf_tgt_poll_group_000", 00:20:32.773 "listen_address": { 00:20:32.773 "trtype": "TCP", 00:20:32.773 "adrfam": "IPv4", 00:20:32.773 "traddr": "10.0.0.2", 00:20:32.773 "trsvcid": "4420" 00:20:32.773 }, 00:20:32.773 "peer_address": { 00:20:32.773 "trtype": "TCP", 00:20:32.773 "adrfam": "IPv4", 00:20:32.773 "traddr": "10.0.0.1", 00:20:32.773 "trsvcid": "55782" 00:20:32.773 }, 00:20:32.773 "auth": { 00:20:32.773 "state": "completed", 00:20:32.773 "digest": "sha384", 00:20:32.773 "dhgroup": "ffdhe8192" 00:20:32.773 } 00:20:32.773 } 00:20:32.773 ]' 00:20:32.773 15:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.773 15:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.773 15:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.773 15:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:32.773 15:32:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.773 15:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.773 15:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.773 15:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.046 15:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OTUwM2RjNTUyYTBmYjQ5MDc5M2FmYzk3MjA5YmNmMmFhYzM0Y2I3Yjk4ZGEzOTdiNzlhNDk0MTUzMmQ2OTZlN0S9laE=: 00:20:33.998 15:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.999 15:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.999 15:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.999 15:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.999 15:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.999 15:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:33.999 15:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.999 15:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.999 15:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:33.999 15:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:34.256 15:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:34.256 15:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.256 15:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:34.256 15:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:34.256 15:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:34.256 15:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.256 15:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.256 15:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.256 15:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.256 15:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.256 15:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.256 15:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.821 00:20:34.821 15:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.821 15:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.821 15:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.821 15:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.821 15:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.821 15:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.821 15:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.821 15:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.821 15:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.821 { 00:20:34.821 "cntlid": 97, 00:20:34.821 "qid": 0, 00:20:34.821 "state": "enabled", 00:20:34.821 "thread": "nvmf_tgt_poll_group_000", 00:20:34.821 "listen_address": { 00:20:34.821 "trtype": "TCP", 00:20:34.821 "adrfam": "IPv4", 00:20:34.821 "traddr": "10.0.0.2", 00:20:34.821 "trsvcid": "4420" 00:20:34.821 }, 00:20:34.821 "peer_address": { 00:20:34.821 "trtype": "TCP", 00:20:34.821 "adrfam": "IPv4", 00:20:34.821 "traddr": "10.0.0.1", 00:20:34.821 "trsvcid": "55808" 00:20:34.821 }, 00:20:34.821 "auth": { 00:20:34.821 "state": "completed", 00:20:34.821 "digest": "sha512", 00:20:34.821 "dhgroup": "null" 00:20:34.821 } 00:20:34.821 } 00:20:34.821 ]' 00:20:34.821 15:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.078 15:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.078 15:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.078 15:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:35.078 15:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.078 15:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.078 15:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.078 15:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.335 15:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDU4ZjkxOGU0MzFiYzJiNzJmNDhmMDgwMmIyYmYxZTY2ZTQ2ODBkY2IzZjkyOWQ2FRlmSg==: --dhchap-ctrl-secret 
DHHC-1:03:Y2MyNWJjOWEwNjhlZTA0MDA0YWQ1ODBiNzg5OTFlMmE4MGU4OTAwYmZhYjFmY2Y1YmE0MzlmOGY0ODdjZjEyYoFFa1c=: 00:20:36.272 15:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.272 15:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.272 15:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.272 15:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.272 15:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.272 15:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.272 15:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:36.272 15:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:36.530 15:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:36.530 15:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.530 15:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:36.530 15:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:36.530 15:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:36.530 15:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.530 15:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.530 15:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.530 15:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.530 15:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.530 15:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.531 15:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.788 00:20:36.788 15:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.788 15:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.788 15:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.046 15:32:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.046 15:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.046 15:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.046 15:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.046 15:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.046 15:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.046 { 00:20:37.046 "cntlid": 99, 00:20:37.046 "qid": 0, 00:20:37.046 "state": "enabled", 00:20:37.046 "thread": "nvmf_tgt_poll_group_000", 00:20:37.046 "listen_address": { 00:20:37.046 "trtype": "TCP", 00:20:37.046 "adrfam": "IPv4", 00:20:37.046 "traddr": "10.0.0.2", 00:20:37.046 "trsvcid": "4420" 00:20:37.046 }, 00:20:37.046 "peer_address": { 00:20:37.046 "trtype": "TCP", 00:20:37.046 "adrfam": "IPv4", 00:20:37.046 "traddr": "10.0.0.1", 00:20:37.046 "trsvcid": "55840" 00:20:37.046 }, 00:20:37.046 "auth": { 00:20:37.046 "state": "completed", 00:20:37.046 "digest": "sha512", 00:20:37.046 "dhgroup": "null" 00:20:37.046 } 00:20:37.046 } 00:20:37.046 ]' 00:20:37.046 15:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.304 15:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:37.304 15:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.304 15:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:37.304 15:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.304 15:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.304 15:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.304 15:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.561 15:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjY2MGJkYjA5ZDNlMDY0ZjZlODYxN2VlMDg1OWY2MzLRCdob: --dhchap-ctrl-secret DHHC-1:02:MjZjZTRkNWMwNzA4MWY5NjJkNmI1YmY1NGVlNGFjYTgxOTM4M2EyZmE3ODNiNGYxdMlbGA==: 00:20:38.520 15:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.520 15:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.520 15:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.520 15:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.520 15:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.520 15:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.520 15:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:38.520 15:32:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:38.779 15:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:20:38.779 15:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.779 15:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:38.779 15:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:38.779 15:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:38.779 15:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.779 15:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.779 15:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.779 15:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.779 15:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.779 15:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.779 15:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.348 00:20:39.348 15:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.348 15:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.348 15:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.348 15:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.348 15:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.348 15:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.348 15:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.348 15:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.348 15:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.348 { 00:20:39.348 "cntlid": 101, 00:20:39.348 "qid": 0, 00:20:39.348 "state": "enabled", 00:20:39.348 "thread": "nvmf_tgt_poll_group_000", 00:20:39.348 "listen_address": { 00:20:39.348 "trtype": "TCP", 00:20:39.348 "adrfam": "IPv4", 00:20:39.348 "traddr": "10.0.0.2", 00:20:39.348 "trsvcid": "4420" 00:20:39.348 }, 00:20:39.348 "peer_address": { 00:20:39.348 "trtype": "TCP", 00:20:39.348 "adrfam": "IPv4", 00:20:39.348 "traddr": "10.0.0.1", 00:20:39.348 "trsvcid": "41888" 00:20:39.348 }, 00:20:39.348 "auth": 
{ 00:20:39.348 "state": "completed", 00:20:39.348 "digest": "sha512", 00:20:39.348 "dhgroup": "null" 00:20:39.348 } 00:20:39.348 } 00:20:39.348 ]' 00:20:39.348 15:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.607 15:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.607 15:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.607 15:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:39.607 15:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.607 15:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.607 15:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.607 15:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.865 15:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDhjYTNjOTYwYzM0ZjA4OWQ2NWI2MTEwNDVkN2ZjY2UwYTVlNjE4ZWU0OTM1OGNifq4+GA==: --dhchap-ctrl-secret DHHC-1:01:MDk1N2QwYjQ4MGZkYzY4ODc5NWY3NWUzYThjZmFmOGMA7b0d: 00:20:40.802 15:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.802 15:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.802 15:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.802 15:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.802 15:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.802 15:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.802 15:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:40.802 15:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:41.061 15:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:41.061 15:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.061 15:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:41.061 15:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:41.061 15:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:41.061 15:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.061 15:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:41.061 15:32:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.061 15:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.061 15:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.061 15:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.061 15:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.630 00:20:41.630 15:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.630 15:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.630 15:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.630 15:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.630 15:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.630 15:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.630 15:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.630 15:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.630 15:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.630 { 00:20:41.630 "cntlid": 103, 00:20:41.630 "qid": 0, 00:20:41.630 "state": "enabled", 00:20:41.630 "thread": "nvmf_tgt_poll_group_000", 00:20:41.630 "listen_address": { 00:20:41.630 "trtype": "TCP", 00:20:41.630 "adrfam": "IPv4", 00:20:41.630 "traddr": "10.0.0.2", 00:20:41.630 "trsvcid": "4420" 00:20:41.630 }, 00:20:41.630 "peer_address": { 00:20:41.630 "trtype": "TCP", 00:20:41.630 "adrfam": "IPv4", 00:20:41.630 "traddr": "10.0.0.1", 00:20:41.630 "trsvcid": "41912" 00:20:41.630 }, 00:20:41.630 "auth": { 00:20:41.630 "state": "completed", 00:20:41.630 "digest": "sha512", 00:20:41.630 "dhgroup": "null" 00:20:41.630 } 00:20:41.630 } 00:20:41.630 ]' 00:20:41.630 15:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.888 15:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.888 15:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.888 15:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:41.888 15:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.888 15:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.888 15:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.888 15:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.147 15:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OTUwM2RjNTUyYTBmYjQ5MDc5M2FmYzk3MjA5YmNmMmFhYzM0Y2I3Yjk4ZGEzOTdiNzlhNDk0MTUzMmQ2OTZlN0S9laE=: 00:20:43.080 15:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.080 15:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.080 15:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.080 15:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.080 15:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.080 15:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.080 15:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.080 15:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.080 15:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.338 15:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:43.338 15:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.338 15:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:43.338 15:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:43.338 15:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:43.338 15:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.338 15:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.338 15:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.338 15:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.338 15:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.338 15:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.338 15:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.597 00:20:43.597 15:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.597 15:32:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.597 15:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.855 15:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.855 15:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.855 15:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.855 15:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.855 15:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.855 15:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.855 { 00:20:43.855 "cntlid": 105, 00:20:43.855 "qid": 0, 00:20:43.855 "state": "enabled", 00:20:43.855 "thread": "nvmf_tgt_poll_group_000", 00:20:43.855 "listen_address": { 00:20:43.855 "trtype": "TCP", 00:20:43.855 "adrfam": "IPv4", 00:20:43.855 "traddr": "10.0.0.2", 00:20:43.855 "trsvcid": "4420" 00:20:43.855 }, 00:20:43.855 "peer_address": { 00:20:43.855 "trtype": "TCP", 00:20:43.855 "adrfam": "IPv4", 00:20:43.855 "traddr": "10.0.0.1", 00:20:43.855 "trsvcid": "41940" 00:20:43.855 }, 00:20:43.855 "auth": { 00:20:43.855 "state": "completed", 00:20:43.855 "digest": "sha512", 00:20:43.855 "dhgroup": "ffdhe2048" 00:20:43.855 } 00:20:43.855 } 00:20:43.855 ]' 00:20:43.855 15:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.855 15:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:43.855 15:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.113 15:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:44.113 15:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.113 15:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.113 15:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.113 15:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.371 15:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDU4ZjkxOGU0MzFiYzJiNzJmNDhmMDgwMmIyYmYxZTY2ZTQ2ODBkY2IzZjkyOWQ2FRlmSg==: --dhchap-ctrl-secret DHHC-1:03:Y2MyNWJjOWEwNjhlZTA0MDA0YWQ1ODBiNzg5OTFlMmE4MGU4OTAwYmZhYjFmY2Y1YmE0MzlmOGY0ODdjZjEyYoFFa1c=: 00:20:45.308 15:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.308 15:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.308 15:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.308 15:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
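Each iteration above then re-checks the same key material from the Linux kernel initiator with nvme-cli before the host entry is removed for the next digest/dhgroup round. A minimal sketch of that leg, following the trace; the DHHC-1 secret strings below are placeholders, the real values are the ones printed in the nvme connect lines of this log.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# connect through the kernel initiator using the host and controller secrets
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret "DHHC-1:00:<host-key>" --dhchap-ctrl-secret "DHHC-1:03:<ctrl-key>"

# tear the session down and de-authorize the host again
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
$rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"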
00:20:45.308 15:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.308 15:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.308 15:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:45.308 15:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:45.567 15:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:45.567 15:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.567 15:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:45.567 15:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:45.567 15:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:45.567 15:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.567 15:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.567 15:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.567 15:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.567 15:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.567 15:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.567 15:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.825 00:20:45.825 15:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.825 15:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.825 15:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.082 15:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.082 15:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.082 15:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.082 15:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.082 15:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.082 15:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.082 { 00:20:46.082 "cntlid": 107, 00:20:46.082 "qid": 0, 00:20:46.082 "state": "enabled", 00:20:46.082 "thread": 
"nvmf_tgt_poll_group_000", 00:20:46.082 "listen_address": { 00:20:46.082 "trtype": "TCP", 00:20:46.082 "adrfam": "IPv4", 00:20:46.082 "traddr": "10.0.0.2", 00:20:46.082 "trsvcid": "4420" 00:20:46.082 }, 00:20:46.082 "peer_address": { 00:20:46.082 "trtype": "TCP", 00:20:46.082 "adrfam": "IPv4", 00:20:46.082 "traddr": "10.0.0.1", 00:20:46.082 "trsvcid": "41972" 00:20:46.082 }, 00:20:46.082 "auth": { 00:20:46.082 "state": "completed", 00:20:46.082 "digest": "sha512", 00:20:46.082 "dhgroup": "ffdhe2048" 00:20:46.082 } 00:20:46.082 } 00:20:46.082 ]' 00:20:46.082 15:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.372 15:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:46.372 15:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.372 15:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:46.372 15:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.372 15:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.372 15:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.372 15:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.629 15:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjY2MGJkYjA5ZDNlMDY0ZjZlODYxN2VlMDg1OWY2MzLRCdob: --dhchap-ctrl-secret DHHC-1:02:MjZjZTRkNWMwNzA4MWY5NjJkNmI1YmY1NGVlNGFjYTgxOTM4M2EyZmE3ODNiNGYxdMlbGA==: 00:20:47.563 15:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.563 15:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.563 15:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.563 15:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.563 15:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.563 15:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.563 15:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:47.563 15:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:47.820 15:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:20:47.820 15:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.820 15:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:47.820 15:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:47.821 15:32:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:47.821 15:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.821 15:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.821 15:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.821 15:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.821 15:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.821 15:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.821 15:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.078 00:20:48.078 15:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.078 15:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.078 15:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.335 15:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.335 15:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.335 15:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.335 15:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.335 15:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.335 15:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.335 { 00:20:48.335 "cntlid": 109, 00:20:48.335 "qid": 0, 00:20:48.335 "state": "enabled", 00:20:48.335 "thread": "nvmf_tgt_poll_group_000", 00:20:48.335 "listen_address": { 00:20:48.335 "trtype": "TCP", 00:20:48.335 "adrfam": "IPv4", 00:20:48.335 "traddr": "10.0.0.2", 00:20:48.335 "trsvcid": "4420" 00:20:48.335 }, 00:20:48.335 "peer_address": { 00:20:48.335 "trtype": "TCP", 00:20:48.335 "adrfam": "IPv4", 00:20:48.335 "traddr": "10.0.0.1", 00:20:48.335 "trsvcid": "60928" 00:20:48.335 }, 00:20:48.335 "auth": { 00:20:48.335 "state": "completed", 00:20:48.335 "digest": "sha512", 00:20:48.335 "dhgroup": "ffdhe2048" 00:20:48.335 } 00:20:48.335 } 00:20:48.335 ]' 00:20:48.335 15:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.335 15:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.335 15:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.593 15:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:48.593 15:32:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.593 15:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.593 15:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.593 15:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.850 15:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDhjYTNjOTYwYzM0ZjA4OWQ2NWI2MTEwNDVkN2ZjY2UwYTVlNjE4ZWU0OTM1OGNifq4+GA==: --dhchap-ctrl-secret DHHC-1:01:MDk1N2QwYjQ4MGZkYzY4ODc5NWY3NWUzYThjZmFmOGMA7b0d: 00:20:49.784 15:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.784 15:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.784 15:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.784 15:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.784 15:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.784 15:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.784 15:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:49.784 15:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:50.042 15:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:50.042 15:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.042 15:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:50.042 15:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:50.042 15:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:50.042 15:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.042 15:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:50.042 15:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.042 15:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.042 15:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.042 15:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:50.042 15:32:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:50.299 00:20:50.299 15:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.299 15:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.299 15:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.556 15:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.556 15:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.556 15:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.556 15:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.556 15:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.556 15:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.556 { 00:20:50.556 "cntlid": 111, 00:20:50.556 "qid": 0, 00:20:50.556 "state": "enabled", 00:20:50.556 "thread": "nvmf_tgt_poll_group_000", 00:20:50.556 "listen_address": { 00:20:50.556 "trtype": "TCP", 00:20:50.556 "adrfam": "IPv4", 00:20:50.556 "traddr": "10.0.0.2", 00:20:50.556 "trsvcid": "4420" 00:20:50.556 }, 00:20:50.556 "peer_address": { 00:20:50.556 "trtype": "TCP", 00:20:50.556 "adrfam": "IPv4", 00:20:50.556 "traddr": "10.0.0.1", 00:20:50.556 "trsvcid": "60946" 00:20:50.556 }, 00:20:50.556 "auth": { 00:20:50.556 "state": "completed", 00:20:50.556 "digest": "sha512", 00:20:50.556 "dhgroup": "ffdhe2048" 00:20:50.556 } 00:20:50.556 } 00:20:50.556 ]' 00:20:50.556 15:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.556 15:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:50.556 15:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.814 15:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:50.814 15:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.814 15:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.814 15:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.814 15:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.072 15:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OTUwM2RjNTUyYTBmYjQ5MDc5M2FmYzk3MjA5YmNmMmFhYzM0Y2I3Yjk4ZGEzOTdiNzlhNDk0MTUzMmQ2OTZlN0S9laE=: 00:20:52.004 15:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.004 15:32:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.004 15:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.004 15:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.004 15:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.004 15:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.004 15:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.004 15:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:52.004 15:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:52.262 15:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:52.262 15:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:52.262 15:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:52.262 15:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:52.262 15:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:52.262 15:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.262 15:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.262 15:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.262 15:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.262 15:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.262 15:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.262 15:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.518 00:20:52.518 15:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.518 15:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.518 15:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.775 15:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.775 15:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.775 15:32:23 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.775 15:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.775 15:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.775 15:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.775 { 00:20:52.775 "cntlid": 113, 00:20:52.775 "qid": 0, 00:20:52.775 "state": "enabled", 00:20:52.775 "thread": "nvmf_tgt_poll_group_000", 00:20:52.775 "listen_address": { 00:20:52.775 "trtype": "TCP", 00:20:52.775 "adrfam": "IPv4", 00:20:52.775 "traddr": "10.0.0.2", 00:20:52.775 "trsvcid": "4420" 00:20:52.775 }, 00:20:52.775 "peer_address": { 00:20:52.775 "trtype": "TCP", 00:20:52.775 "adrfam": "IPv4", 00:20:52.775 "traddr": "10.0.0.1", 00:20:52.775 "trsvcid": "60966" 00:20:52.775 }, 00:20:52.775 "auth": { 00:20:52.775 "state": "completed", 00:20:52.775 "digest": "sha512", 00:20:52.775 "dhgroup": "ffdhe3072" 00:20:52.775 } 00:20:52.775 } 00:20:52.775 ]' 00:20:52.775 15:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.033 15:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.033 15:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.033 15:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:53.033 15:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.033 15:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.033 15:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.033 15:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.291 15:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDU4ZjkxOGU0MzFiYzJiNzJmNDhmMDgwMmIyYmYxZTY2ZTQ2ODBkY2IzZjkyOWQ2FRlmSg==: --dhchap-ctrl-secret DHHC-1:03:Y2MyNWJjOWEwNjhlZTA0MDA0YWQ1ODBiNzg5OTFlMmE4MGU4OTAwYmZhYjFmY2Y1YmE0MzlmOGY0ODdjZjEyYoFFa1c=: 00:20:54.225 15:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.225 15:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.225 15:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.225 15:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.225 15:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.225 15:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.225 15:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:54.225 15:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:54.483 15:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:54.483 15:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.483 15:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:54.483 15:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:54.483 15:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:54.483 15:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.483 15:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.483 15:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.483 15:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.483 15:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.483 15:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.483 15:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.739 00:20:54.739 15:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.739 15:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.739 15:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.996 15:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.996 15:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.996 15:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.996 15:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.996 15:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.996 15:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.996 { 00:20:54.996 "cntlid": 115, 00:20:54.996 "qid": 0, 00:20:54.996 "state": "enabled", 00:20:54.996 "thread": "nvmf_tgt_poll_group_000", 00:20:54.996 "listen_address": { 00:20:54.996 "trtype": "TCP", 00:20:54.996 "adrfam": "IPv4", 00:20:54.996 "traddr": "10.0.0.2", 00:20:54.996 "trsvcid": "4420" 00:20:54.996 }, 00:20:54.996 "peer_address": { 00:20:54.996 "trtype": "TCP", 00:20:54.996 "adrfam": "IPv4", 00:20:54.996 "traddr": "10.0.0.1", 00:20:54.996 "trsvcid": "32768" 00:20:54.996 }, 00:20:54.996 "auth": { 00:20:54.996 "state": "completed", 00:20:54.996 "digest": "sha512", 00:20:54.996 "dhgroup": "ffdhe3072" 00:20:54.996 } 00:20:54.996 } 
00:20:54.996 ]' 00:20:54.996 15:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.254 15:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.254 15:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.254 15:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:55.254 15:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.254 15:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.254 15:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.254 15:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.512 15:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjY2MGJkYjA5ZDNlMDY0ZjZlODYxN2VlMDg1OWY2MzLRCdob: --dhchap-ctrl-secret DHHC-1:02:MjZjZTRkNWMwNzA4MWY5NjJkNmI1YmY1NGVlNGFjYTgxOTM4M2EyZmE3ODNiNGYxdMlbGA==: 00:20:56.444 15:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.444 15:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.444 15:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.444 15:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.444 15:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.444 15:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.444 15:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:56.444 15:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:56.702 15:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:56.702 15:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.702 15:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:56.702 15:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:56.702 15:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:56.702 15:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.702 15:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.702 15:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.702 15:32:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.702 15:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.702 15:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.702 15:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.960 00:20:56.960 15:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.960 15:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.960 15:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.219 15:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.219 15:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.219 15:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.219 15:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.219 15:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.219 15:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.219 { 00:20:57.219 "cntlid": 117, 00:20:57.219 "qid": 0, 00:20:57.219 "state": "enabled", 00:20:57.219 "thread": "nvmf_tgt_poll_group_000", 00:20:57.219 "listen_address": { 00:20:57.219 "trtype": "TCP", 00:20:57.219 "adrfam": "IPv4", 00:20:57.219 "traddr": "10.0.0.2", 00:20:57.219 "trsvcid": "4420" 00:20:57.219 }, 00:20:57.219 "peer_address": { 00:20:57.219 "trtype": "TCP", 00:20:57.219 "adrfam": "IPv4", 00:20:57.219 "traddr": "10.0.0.1", 00:20:57.219 "trsvcid": "32792" 00:20:57.219 }, 00:20:57.219 "auth": { 00:20:57.219 "state": "completed", 00:20:57.219 "digest": "sha512", 00:20:57.219 "dhgroup": "ffdhe3072" 00:20:57.219 } 00:20:57.219 } 00:20:57.219 ]' 00:20:57.219 15:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.219 15:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.219 15:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.477 15:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:57.477 15:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.477 15:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.477 15:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.477 15:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.735 15:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDhjYTNjOTYwYzM0ZjA4OWQ2NWI2MTEwNDVkN2ZjY2UwYTVlNjE4ZWU0OTM1OGNifq4+GA==: --dhchap-ctrl-secret DHHC-1:01:MDk1N2QwYjQ4MGZkYzY4ODc5NWY3NWUzYThjZmFmOGMA7b0d: 00:20:58.669 15:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.669 15:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.669 15:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.669 15:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.669 15:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.669 15:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.669 15:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:58.669 15:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:58.928 15:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:58.928 15:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.928 15:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:58.928 15:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:58.928 15:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:58.928 15:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.928 15:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:58.928 15:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.928 15:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.928 15:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.928 15:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.928 15:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:59.186 00:20:59.186 15:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:59.186 15:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.186 15:32:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.444 15:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.445 15:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.445 15:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.445 15:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.445 15:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.445 15:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:59.445 { 00:20:59.445 "cntlid": 119, 00:20:59.445 "qid": 0, 00:20:59.445 "state": "enabled", 00:20:59.445 "thread": "nvmf_tgt_poll_group_000", 00:20:59.445 "listen_address": { 00:20:59.445 "trtype": "TCP", 00:20:59.445 "adrfam": "IPv4", 00:20:59.445 "traddr": "10.0.0.2", 00:20:59.445 "trsvcid": "4420" 00:20:59.445 }, 00:20:59.445 "peer_address": { 00:20:59.445 "trtype": "TCP", 00:20:59.445 "adrfam": "IPv4", 00:20:59.445 "traddr": "10.0.0.1", 00:20:59.445 "trsvcid": "60666" 00:20:59.445 }, 00:20:59.445 "auth": { 00:20:59.445 "state": "completed", 00:20:59.445 "digest": "sha512", 00:20:59.445 "dhgroup": "ffdhe3072" 00:20:59.445 } 00:20:59.445 } 00:20:59.445 ]' 00:20:59.445 15:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.445 15:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.445 15:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.445 15:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:59.445 15:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.728 15:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.728 15:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.728 15:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.728 15:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OTUwM2RjNTUyYTBmYjQ5MDc5M2FmYzk3MjA5YmNmMmFhYzM0Y2I3Yjk4ZGEzOTdiNzlhNDk0MTUzMmQ2OTZlN0S9laE=: 00:21:00.670 15:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.670 15:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.670 15:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.670 15:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.670 15:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.670 15:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:00.670 15:32:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:00.670 15:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:00.670 15:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:00.928 15:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:00.928 15:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.928 15:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:00.928 15:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:00.928 15:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:00.928 15:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.928 15:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.928 15:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.928 15:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.928 15:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.928 15:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.928 15:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.494 00:21:01.494 15:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.494 15:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.494 15:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.753 15:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.753 15:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.753 15:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.753 15:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.753 15:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.753 15:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:01.753 { 00:21:01.753 "cntlid": 121, 00:21:01.753 "qid": 0, 00:21:01.753 "state": "enabled", 00:21:01.753 "thread": "nvmf_tgt_poll_group_000", 00:21:01.753 "listen_address": { 00:21:01.753 "trtype": "TCP", 00:21:01.753 "adrfam": "IPv4", 
00:21:01.753 "traddr": "10.0.0.2", 00:21:01.753 "trsvcid": "4420" 00:21:01.753 }, 00:21:01.753 "peer_address": { 00:21:01.753 "trtype": "TCP", 00:21:01.753 "adrfam": "IPv4", 00:21:01.753 "traddr": "10.0.0.1", 00:21:01.753 "trsvcid": "60692" 00:21:01.753 }, 00:21:01.753 "auth": { 00:21:01.753 "state": "completed", 00:21:01.753 "digest": "sha512", 00:21:01.753 "dhgroup": "ffdhe4096" 00:21:01.753 } 00:21:01.753 } 00:21:01.753 ]' 00:21:01.753 15:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:01.753 15:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.753 15:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:01.753 15:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:01.753 15:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.753 15:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.753 15:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.753 15:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.011 15:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDU4ZjkxOGU0MzFiYzJiNzJmNDhmMDgwMmIyYmYxZTY2ZTQ2ODBkY2IzZjkyOWQ2FRlmSg==: --dhchap-ctrl-secret DHHC-1:03:Y2MyNWJjOWEwNjhlZTA0MDA0YWQ1ODBiNzg5OTFlMmE4MGU4OTAwYmZhYjFmY2Y1YmE0MzlmOGY0ODdjZjEyYoFFa1c=: 00:21:02.941 15:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.941 15:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.941 15:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.941 15:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.941 15:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.941 15:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.941 15:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:02.941 15:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:03.198 15:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:21:03.198 15:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.198 15:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:03.198 15:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:03.198 15:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:03.198 15:32:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.198 15:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.198 15:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.198 15:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.198 15:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.198 15:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.199 15:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.762 00:21:03.762 15:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.762 15:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.762 15:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.019 15:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.019 15:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.019 15:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.019 15:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.019 15:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.019 15:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.019 { 00:21:04.019 "cntlid": 123, 00:21:04.019 "qid": 0, 00:21:04.019 "state": "enabled", 00:21:04.019 "thread": "nvmf_tgt_poll_group_000", 00:21:04.019 "listen_address": { 00:21:04.019 "trtype": "TCP", 00:21:04.019 "adrfam": "IPv4", 00:21:04.019 "traddr": "10.0.0.2", 00:21:04.019 "trsvcid": "4420" 00:21:04.019 }, 00:21:04.019 "peer_address": { 00:21:04.019 "trtype": "TCP", 00:21:04.019 "adrfam": "IPv4", 00:21:04.019 "traddr": "10.0.0.1", 00:21:04.019 "trsvcid": "60720" 00:21:04.019 }, 00:21:04.019 "auth": { 00:21:04.019 "state": "completed", 00:21:04.019 "digest": "sha512", 00:21:04.019 "dhgroup": "ffdhe4096" 00:21:04.019 } 00:21:04.019 } 00:21:04.019 ]' 00:21:04.019 15:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.019 15:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.019 15:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.019 15:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:04.019 15:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.019 15:32:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.019 15:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.019 15:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.278 15:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjY2MGJkYjA5ZDNlMDY0ZjZlODYxN2VlMDg1OWY2MzLRCdob: --dhchap-ctrl-secret DHHC-1:02:MjZjZTRkNWMwNzA4MWY5NjJkNmI1YmY1NGVlNGFjYTgxOTM4M2EyZmE3ODNiNGYxdMlbGA==: 00:21:05.650 15:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.650 15:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.650 15:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.650 15:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.650 15:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.650 15:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:05.650 15:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:05.650 15:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:05.650 15:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:05.650 15:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:05.650 15:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:05.650 15:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:05.650 15:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:05.650 15:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.650 15:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.650 15:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.650 15:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.650 15:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.650 15:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.650 15:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.216 00:21:06.216 15:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.216 15:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.216 15:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.474 15:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.474 15:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.474 15:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.474 15:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.474 15:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.474 15:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.474 { 00:21:06.474 "cntlid": 125, 00:21:06.474 "qid": 0, 00:21:06.474 "state": "enabled", 00:21:06.474 "thread": "nvmf_tgt_poll_group_000", 00:21:06.474 "listen_address": { 00:21:06.474 "trtype": "TCP", 00:21:06.474 "adrfam": "IPv4", 00:21:06.474 "traddr": "10.0.0.2", 00:21:06.474 "trsvcid": "4420" 00:21:06.474 }, 00:21:06.474 "peer_address": { 00:21:06.474 "trtype": "TCP", 00:21:06.474 "adrfam": "IPv4", 00:21:06.474 "traddr": "10.0.0.1", 00:21:06.474 "trsvcid": "60746" 00:21:06.474 }, 00:21:06.474 "auth": { 00:21:06.474 "state": "completed", 00:21:06.474 "digest": "sha512", 00:21:06.474 "dhgroup": "ffdhe4096" 00:21:06.474 } 00:21:06.474 } 00:21:06.474 ]' 00:21:06.474 15:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.474 15:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.474 15:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.474 15:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:06.474 15:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.474 15:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.474 15:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.474 15:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.733 15:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDhjYTNjOTYwYzM0ZjA4OWQ2NWI2MTEwNDVkN2ZjY2UwYTVlNjE4ZWU0OTM1OGNifq4+GA==: --dhchap-ctrl-secret DHHC-1:01:MDk1N2QwYjQ4MGZkYzY4ODc5NWY3NWUzYThjZmFmOGMA7b0d: 00:21:07.668 15:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
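The cycle above (and each one that follows) exercises the same DH-HMAC-CHAP flow with a different digest/dhgroup/key combination. A minimal bash sketch of one such iteration is given here for reference; it uses only commands and flags visible in this log, the socket paths, NQNs and key names are copied from the trace, the key objects key2/ckey2 are assumed to have been registered earlier in auth.sh, the target-side rpc.py calls assume the default RPC socket, and the plaintext secrets passed to nvme connect are placeholders rather than the real DHHC-1 values.

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration from target/auth.sh.
# Assumes the SPDK target and the host-side bdev application are already
# running and that DH-HMAC-CHAP keys key2/ckey2 were registered beforehand.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Restrict the host-side initiator to one digest/dhgroup combination
# (sha512/ffdhe4096 is used here as an example).
$RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Allow the host on the target subsystem with the matching key pair
# (target-side RPC; default socket assumed).
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Attach from the host side; authentication runs during CONNECT.
$RPC -s $HOSTSOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Verify the negotiated parameters on the target's queue pair.
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.digest'   # expect sha512
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.dhgroup'  # expect ffdhe4096
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'    # expect completed

# Tear down, then repeat the same authentication through the kernel initiator.
$RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret "$PLAINTEXT_KEY" --dhchap-ctrl-secret "$PLAINTEXT_CTRL_KEY"  # placeholders
nvme disconnect -n $SUBNQN
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN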
00:21:07.668 15:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.668 15:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.668 15:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.668 15:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.668 15:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.668 15:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:07.668 15:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:07.926 15:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:07.926 15:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:07.926 15:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:07.926 15:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:07.926 15:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:07.926 15:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.926 15:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:07.926 15:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.926 15:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.926 15:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.926 15:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:07.926 15:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:08.491 00:21:08.491 15:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.491 15:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.491 15:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.491 15:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.491 15:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.491 15:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.491 15:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:21:08.491 15:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.491 15:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.491 { 00:21:08.491 "cntlid": 127, 00:21:08.491 "qid": 0, 00:21:08.491 "state": "enabled", 00:21:08.491 "thread": "nvmf_tgt_poll_group_000", 00:21:08.491 "listen_address": { 00:21:08.491 "trtype": "TCP", 00:21:08.491 "adrfam": "IPv4", 00:21:08.491 "traddr": "10.0.0.2", 00:21:08.491 "trsvcid": "4420" 00:21:08.491 }, 00:21:08.491 "peer_address": { 00:21:08.491 "trtype": "TCP", 00:21:08.491 "adrfam": "IPv4", 00:21:08.491 "traddr": "10.0.0.1", 00:21:08.491 "trsvcid": "32798" 00:21:08.491 }, 00:21:08.491 "auth": { 00:21:08.491 "state": "completed", 00:21:08.491 "digest": "sha512", 00:21:08.491 "dhgroup": "ffdhe4096" 00:21:08.491 } 00:21:08.491 } 00:21:08.491 ]' 00:21:08.491 15:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.748 15:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.748 15:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.748 15:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:08.748 15:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:08.748 15:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.748 15:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.748 15:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.006 15:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OTUwM2RjNTUyYTBmYjQ5MDc5M2FmYzk3MjA5YmNmMmFhYzM0Y2I3Yjk4ZGEzOTdiNzlhNDk0MTUzMmQ2OTZlN0S9laE=: 00:21:09.939 15:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.939 15:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.939 15:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.939 15:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.939 15:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.939 15:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:09.939 15:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:09.939 15:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:09.939 15:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:10.197 15:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:21:10.197 15:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.197 15:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:10.197 15:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:10.197 15:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:10.197 15:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.197 15:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.197 15:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.197 15:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.197 15:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.197 15:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.197 15:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.760 00:21:10.760 15:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:10.760 15:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.760 15:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.017 15:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.017 15:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.017 15:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.017 15:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.017 15:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.017 15:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.017 { 00:21:11.018 "cntlid": 129, 00:21:11.018 "qid": 0, 00:21:11.018 "state": "enabled", 00:21:11.018 "thread": "nvmf_tgt_poll_group_000", 00:21:11.018 "listen_address": { 00:21:11.018 "trtype": "TCP", 00:21:11.018 "adrfam": "IPv4", 00:21:11.018 "traddr": "10.0.0.2", 00:21:11.018 "trsvcid": "4420" 00:21:11.018 }, 00:21:11.018 "peer_address": { 00:21:11.018 "trtype": "TCP", 00:21:11.018 "adrfam": "IPv4", 00:21:11.018 "traddr": "10.0.0.1", 00:21:11.018 "trsvcid": "32828" 00:21:11.018 }, 00:21:11.018 "auth": { 00:21:11.018 "state": "completed", 00:21:11.018 "digest": "sha512", 00:21:11.018 "dhgroup": "ffdhe6144" 00:21:11.018 } 00:21:11.018 } 00:21:11.018 ]' 00:21:11.018 15:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.018 15:32:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.018 15:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.018 15:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:11.018 15:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.018 15:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.018 15:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.018 15:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.275 15:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDU4ZjkxOGU0MzFiYzJiNzJmNDhmMDgwMmIyYmYxZTY2ZTQ2ODBkY2IzZjkyOWQ2FRlmSg==: --dhchap-ctrl-secret DHHC-1:03:Y2MyNWJjOWEwNjhlZTA0MDA0YWQ1ODBiNzg5OTFlMmE4MGU4OTAwYmZhYjFmY2Y1YmE0MzlmOGY0ODdjZjEyYoFFa1c=: 00:21:12.213 15:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.213 15:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.213 15:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.213 15:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.213 15:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.213 15:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:12.213 15:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:12.213 15:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:12.470 15:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:12.470 15:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:12.470 15:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:12.470 15:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:12.470 15:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:12.470 15:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.470 15:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.470 15:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.470 15:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.470 15:32:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.470 15:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.470 15:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.031 00:21:13.031 15:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.031 15:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.031 15:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.287 15:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.287 15:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.287 15:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.287 15:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.287 15:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.287 15:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.287 { 00:21:13.287 "cntlid": 131, 00:21:13.287 "qid": 0, 00:21:13.287 "state": "enabled", 00:21:13.287 "thread": "nvmf_tgt_poll_group_000", 00:21:13.287 "listen_address": { 00:21:13.287 "trtype": "TCP", 00:21:13.287 "adrfam": "IPv4", 00:21:13.287 "traddr": "10.0.0.2", 00:21:13.287 "trsvcid": "4420" 00:21:13.287 }, 00:21:13.287 "peer_address": { 00:21:13.287 "trtype": "TCP", 00:21:13.287 "adrfam": "IPv4", 00:21:13.287 "traddr": "10.0.0.1", 00:21:13.287 "trsvcid": "32864" 00:21:13.287 }, 00:21:13.287 "auth": { 00:21:13.287 "state": "completed", 00:21:13.287 "digest": "sha512", 00:21:13.287 "dhgroup": "ffdhe6144" 00:21:13.287 } 00:21:13.287 } 00:21:13.287 ]' 00:21:13.287 15:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.287 15:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.544 15:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.544 15:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:13.544 15:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.544 15:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.544 15:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.544 15:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.806 15:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjY2MGJkYjA5ZDNlMDY0ZjZlODYxN2VlMDg1OWY2MzLRCdob: --dhchap-ctrl-secret DHHC-1:02:MjZjZTRkNWMwNzA4MWY5NjJkNmI1YmY1NGVlNGFjYTgxOTM4M2EyZmE3ODNiNGYxdMlbGA==: 00:21:14.783 15:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.783 15:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.783 15:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.783 15:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.783 15:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.783 15:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:14.784 15:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:14.784 15:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:15.042 15:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:15.042 15:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.042 15:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:15.042 15:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:15.042 15:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:15.042 15:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.042 15:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.042 15:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.042 15:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.042 15:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.042 15:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.042 15:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.606 00:21:15.606 15:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.607 15:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.607 15:32:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.865 15:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.865 15:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.865 15:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.865 15:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.865 15:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.865 15:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.865 { 00:21:15.865 "cntlid": 133, 00:21:15.865 "qid": 0, 00:21:15.865 "state": "enabled", 00:21:15.865 "thread": "nvmf_tgt_poll_group_000", 00:21:15.865 "listen_address": { 00:21:15.865 "trtype": "TCP", 00:21:15.865 "adrfam": "IPv4", 00:21:15.865 "traddr": "10.0.0.2", 00:21:15.865 "trsvcid": "4420" 00:21:15.865 }, 00:21:15.865 "peer_address": { 00:21:15.865 "trtype": "TCP", 00:21:15.865 "adrfam": "IPv4", 00:21:15.865 "traddr": "10.0.0.1", 00:21:15.865 "trsvcid": "32878" 00:21:15.865 }, 00:21:15.865 "auth": { 00:21:15.865 "state": "completed", 00:21:15.865 "digest": "sha512", 00:21:15.865 "dhgroup": "ffdhe6144" 00:21:15.865 } 00:21:15.865 } 00:21:15.865 ]' 00:21:15.865 15:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.865 15:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.865 15:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.865 15:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:15.865 15:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.865 15:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.865 15:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.865 15:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.123 15:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDhjYTNjOTYwYzM0ZjA4OWQ2NWI2MTEwNDVkN2ZjY2UwYTVlNjE4ZWU0OTM1OGNifq4+GA==: --dhchap-ctrl-secret DHHC-1:01:MDk1N2QwYjQ4MGZkYzY4ODc5NWY3NWUzYThjZmFmOGMA7b0d: 00:21:17.056 15:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.056 15:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.056 15:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.056 15:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.315 15:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.315 15:32:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.315 15:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:17.315 15:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:17.572 15:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:17.572 15:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.572 15:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:17.572 15:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:17.573 15:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:17.573 15:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.573 15:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:17.573 15:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.573 15:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.573 15:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.573 15:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.573 15:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:18.138 00:21:18.138 15:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:18.138 15:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.138 15:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.396 15:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.396 15:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.396 15:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.396 15:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.396 15:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.396 15:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.396 { 00:21:18.396 "cntlid": 135, 00:21:18.396 "qid": 0, 00:21:18.396 "state": "enabled", 00:21:18.396 "thread": "nvmf_tgt_poll_group_000", 00:21:18.396 "listen_address": { 00:21:18.396 "trtype": "TCP", 00:21:18.396 "adrfam": "IPv4", 00:21:18.396 "traddr": "10.0.0.2", 00:21:18.396 "trsvcid": "4420" 00:21:18.396 }, 
00:21:18.396 "peer_address": { 00:21:18.396 "trtype": "TCP", 00:21:18.396 "adrfam": "IPv4", 00:21:18.396 "traddr": "10.0.0.1", 00:21:18.396 "trsvcid": "35764" 00:21:18.396 }, 00:21:18.396 "auth": { 00:21:18.396 "state": "completed", 00:21:18.396 "digest": "sha512", 00:21:18.396 "dhgroup": "ffdhe6144" 00:21:18.396 } 00:21:18.396 } 00:21:18.396 ]' 00:21:18.396 15:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.396 15:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.396 15:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.396 15:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:18.396 15:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.396 15:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.396 15:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.396 15:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.654 15:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OTUwM2RjNTUyYTBmYjQ5MDc5M2FmYzk3MjA5YmNmMmFhYzM0Y2I3Yjk4ZGEzOTdiNzlhNDk0MTUzMmQ2OTZlN0S9laE=: 00:21:19.587 15:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.587 15:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.587 15:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.587 15:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.587 15:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.587 15:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.587 15:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.587 15:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:19.587 15:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:19.845 15:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:19.845 15:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.845 15:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:19.845 15:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:19.845 15:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:19.845 15:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:21:19.845 15:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.845 15:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.845 15:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.845 15:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.845 15:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.845 15:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.780 00:21:20.780 15:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:20.780 15:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.780 15:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.038 15:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.038 15:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.038 15:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.038 15:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.038 15:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.038 15:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:21.038 { 00:21:21.038 "cntlid": 137, 00:21:21.038 "qid": 0, 00:21:21.038 "state": "enabled", 00:21:21.038 "thread": "nvmf_tgt_poll_group_000", 00:21:21.038 "listen_address": { 00:21:21.038 "trtype": "TCP", 00:21:21.038 "adrfam": "IPv4", 00:21:21.038 "traddr": "10.0.0.2", 00:21:21.038 "trsvcid": "4420" 00:21:21.038 }, 00:21:21.038 "peer_address": { 00:21:21.038 "trtype": "TCP", 00:21:21.038 "adrfam": "IPv4", 00:21:21.038 "traddr": "10.0.0.1", 00:21:21.038 "trsvcid": "35784" 00:21:21.038 }, 00:21:21.038 "auth": { 00:21:21.038 "state": "completed", 00:21:21.038 "digest": "sha512", 00:21:21.038 "dhgroup": "ffdhe8192" 00:21:21.038 } 00:21:21.038 } 00:21:21.038 ]' 00:21:21.038 15:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:21.296 15:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.296 15:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:21.296 15:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:21.296 15:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:21.296 15:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.296 15:32:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.296 15:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.554 15:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDU4ZjkxOGU0MzFiYzJiNzJmNDhmMDgwMmIyYmYxZTY2ZTQ2ODBkY2IzZjkyOWQ2FRlmSg==: --dhchap-ctrl-secret DHHC-1:03:Y2MyNWJjOWEwNjhlZTA0MDA0YWQ1ODBiNzg5OTFlMmE4MGU4OTAwYmZhYjFmY2Y1YmE0MzlmOGY0ODdjZjEyYoFFa1c=: 00:21:22.487 15:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.487 15:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.487 15:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.487 15:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.487 15:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.487 15:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:22.487 15:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:22.487 15:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:22.744 15:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:22.744 15:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.744 15:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:22.744 15:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:22.744 15:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:22.744 15:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.744 15:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.744 15:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.744 15:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.744 15:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.744 15:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.744 15:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.677 00:21:23.677 15:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:23.677 15:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:23.677 15:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.935 15:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.935 15:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.935 15:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.935 15:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.935 15:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.935 15:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:23.935 { 00:21:23.935 "cntlid": 139, 00:21:23.935 "qid": 0, 00:21:23.935 "state": "enabled", 00:21:23.935 "thread": "nvmf_tgt_poll_group_000", 00:21:23.935 "listen_address": { 00:21:23.935 "trtype": "TCP", 00:21:23.935 "adrfam": "IPv4", 00:21:23.935 "traddr": "10.0.0.2", 00:21:23.935 "trsvcid": "4420" 00:21:23.935 }, 00:21:23.935 "peer_address": { 00:21:23.935 "trtype": "TCP", 00:21:23.935 "adrfam": "IPv4", 00:21:23.935 "traddr": "10.0.0.1", 00:21:23.935 "trsvcid": "35816" 00:21:23.935 }, 00:21:23.935 "auth": { 00:21:23.935 "state": "completed", 00:21:23.935 "digest": "sha512", 00:21:23.935 "dhgroup": "ffdhe8192" 00:21:23.935 } 00:21:23.935 } 00:21:23.935 ]' 00:21:23.935 15:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:23.935 15:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.935 15:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:23.935 15:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:23.935 15:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:23.935 15:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.935 15:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.935 15:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.193 15:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjY2MGJkYjA5ZDNlMDY0ZjZlODYxN2VlMDg1OWY2MzLRCdob: --dhchap-ctrl-secret DHHC-1:02:MjZjZTRkNWMwNzA4MWY5NjJkNmI1YmY1NGVlNGFjYTgxOTM4M2EyZmE3ODNiNGYxdMlbGA==: 00:21:25.126 15:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.126 15:32:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.126 15:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.126 15:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.126 15:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.126 15:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.126 15:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:25.126 15:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:25.384 15:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:25.384 15:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:25.384 15:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:25.384 15:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:25.384 15:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:25.384 15:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.384 15:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.384 15:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.384 15:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.384 15:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.384 15:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.384 15:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.318 00:21:26.318 15:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.318 15:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.318 15:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.577 15:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.577 15:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.577 15:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.577 15:32:57 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:26.577 15:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.577 15:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.577 { 00:21:26.577 "cntlid": 141, 00:21:26.577 "qid": 0, 00:21:26.577 "state": "enabled", 00:21:26.577 "thread": "nvmf_tgt_poll_group_000", 00:21:26.577 "listen_address": { 00:21:26.577 "trtype": "TCP", 00:21:26.577 "adrfam": "IPv4", 00:21:26.577 "traddr": "10.0.0.2", 00:21:26.577 "trsvcid": "4420" 00:21:26.577 }, 00:21:26.577 "peer_address": { 00:21:26.577 "trtype": "TCP", 00:21:26.577 "adrfam": "IPv4", 00:21:26.577 "traddr": "10.0.0.1", 00:21:26.577 "trsvcid": "35838" 00:21:26.577 }, 00:21:26.577 "auth": { 00:21:26.577 "state": "completed", 00:21:26.577 "digest": "sha512", 00:21:26.577 "dhgroup": "ffdhe8192" 00:21:26.577 } 00:21:26.577 } 00:21:26.577 ]' 00:21:26.577 15:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.577 15:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.577 15:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.835 15:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:26.835 15:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.835 15:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.835 15:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.835 15:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.093 15:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NDhjYTNjOTYwYzM0ZjA4OWQ2NWI2MTEwNDVkN2ZjY2UwYTVlNjE4ZWU0OTM1OGNifq4+GA==: --dhchap-ctrl-secret DHHC-1:01:MDk1N2QwYjQ4MGZkYzY4ODc5NWY3NWUzYThjZmFmOGMA7b0d: 00:21:28.057 15:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.057 15:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.057 15:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.057 15:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.057 15:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.057 15:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.057 15:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:28.057 15:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:28.321 15:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:21:28.321 15:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.321 15:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:28.321 15:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:28.321 15:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:28.321 15:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.321 15:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:28.321 15:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.321 15:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.321 15:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.322 15:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:28.322 15:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:29.252 00:21:29.252 15:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.252 15:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.252 15:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.509 15:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.509 15:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.509 15:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.509 15:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.509 15:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.509 15:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.509 { 00:21:29.509 "cntlid": 143, 00:21:29.509 "qid": 0, 00:21:29.509 "state": "enabled", 00:21:29.509 "thread": "nvmf_tgt_poll_group_000", 00:21:29.509 "listen_address": { 00:21:29.509 "trtype": "TCP", 00:21:29.509 "adrfam": "IPv4", 00:21:29.509 "traddr": "10.0.0.2", 00:21:29.509 "trsvcid": "4420" 00:21:29.509 }, 00:21:29.509 "peer_address": { 00:21:29.509 "trtype": "TCP", 00:21:29.509 "adrfam": "IPv4", 00:21:29.509 "traddr": "10.0.0.1", 00:21:29.509 "trsvcid": "40554" 00:21:29.509 }, 00:21:29.509 "auth": { 00:21:29.509 "state": "completed", 00:21:29.509 "digest": "sha512", 00:21:29.509 "dhgroup": "ffdhe8192" 00:21:29.509 } 00:21:29.509 } 00:21:29.509 ]' 00:21:29.509 15:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.509 15:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.509 
15:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.509 15:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:29.509 15:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.767 15:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.767 15:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.767 15:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.025 15:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OTUwM2RjNTUyYTBmYjQ5MDc5M2FmYzk3MjA5YmNmMmFhYzM0Y2I3Yjk4ZGEzOTdiNzlhNDk0MTUzMmQ2OTZlN0S9laE=: 00:21:30.956 15:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.956 15:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.956 15:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.956 15:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.956 15:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.956 15:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:30.956 15:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:30.956 15:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:30.956 15:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:30.956 15:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:30.956 15:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:31.214 15:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:31.214 15:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.214 15:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:31.214 15:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:31.214 15:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:31.214 15:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.214 15:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:31.214 15:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.214 15:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.214 15:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.214 15:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.214 15:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.146 00:21:32.146 15:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:32.146 15:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.146 15:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:32.403 15:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.403 15:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.403 15:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.403 15:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.403 15:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.403 15:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.403 { 00:21:32.403 "cntlid": 145, 00:21:32.403 "qid": 0, 00:21:32.403 "state": "enabled", 00:21:32.403 "thread": "nvmf_tgt_poll_group_000", 00:21:32.403 "listen_address": { 00:21:32.403 "trtype": "TCP", 00:21:32.403 "adrfam": "IPv4", 00:21:32.403 "traddr": "10.0.0.2", 00:21:32.403 "trsvcid": "4420" 00:21:32.403 }, 00:21:32.403 "peer_address": { 00:21:32.403 "trtype": "TCP", 00:21:32.403 "adrfam": "IPv4", 00:21:32.403 "traddr": "10.0.0.1", 00:21:32.403 "trsvcid": "40584" 00:21:32.403 }, 00:21:32.403 "auth": { 00:21:32.403 "state": "completed", 00:21:32.403 "digest": "sha512", 00:21:32.403 "dhgroup": "ffdhe8192" 00:21:32.403 } 00:21:32.403 } 00:21:32.403 ]' 00:21:32.403 15:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.403 15:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.403 15:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.403 15:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:32.403 15:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.403 15:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.403 15:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.403 15:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.661 15:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDU4ZjkxOGU0MzFiYzJiNzJmNDhmMDgwMmIyYmYxZTY2ZTQ2ODBkY2IzZjkyOWQ2FRlmSg==: --dhchap-ctrl-secret DHHC-1:03:Y2MyNWJjOWEwNjhlZTA0MDA0YWQ1ODBiNzg5OTFlMmE4MGU4OTAwYmZhYjFmY2Y1YmE0MzlmOGY0ODdjZjEyYoFFa1c=: 00:21:33.594 15:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.594 15:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.594 15:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.594 15:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.594 15:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.594 15:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:33.594 15:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.594 15:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.594 15:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.594 15:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:33.594 15:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:33.594 15:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:33.594 15:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:33.594 15:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:33.594 15:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:33.594 15:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:33.594 15:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:33.594 15:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:21:34.527 request: 00:21:34.527 { 00:21:34.527 "name": "nvme0", 00:21:34.527 "trtype": "tcp", 00:21:34.527 "traddr": "10.0.0.2", 00:21:34.527 "adrfam": "ipv4", 00:21:34.527 "trsvcid": "4420", 00:21:34.527 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:34.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:34.527 "prchk_reftag": false, 00:21:34.527 "prchk_guard": false, 00:21:34.527 "hdgst": false, 00:21:34.527 "ddgst": false, 00:21:34.527 "dhchap_key": "key2", 00:21:34.527 "method": "bdev_nvme_attach_controller", 00:21:34.527 "req_id": 1 00:21:34.527 } 00:21:34.527 Got JSON-RPC error response 00:21:34.527 response: 00:21:34.527 { 00:21:34.527 "code": -5, 00:21:34.527 "message": "Input/output error" 00:21:34.527 } 00:21:34.527 15:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:34.527 15:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:34.527 15:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:34.527 15:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:34.527 15:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.527 15:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.527 15:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.527 15:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.527 15:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.527 15:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.527 15:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.527 15:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.527 15:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:34.527 15:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:34.527 15:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:34.527 15:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:34.527 15:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:34.527 15:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:34.527 15:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:34.527 15:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:34.527 15:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:35.460 request: 00:21:35.460 { 00:21:35.460 "name": "nvme0", 00:21:35.460 "trtype": "tcp", 00:21:35.460 "traddr": "10.0.0.2", 00:21:35.460 "adrfam": "ipv4", 00:21:35.460 "trsvcid": "4420", 00:21:35.460 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:35.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:35.460 "prchk_reftag": false, 00:21:35.460 "prchk_guard": false, 00:21:35.460 "hdgst": false, 00:21:35.460 "ddgst": false, 00:21:35.460 "dhchap_key": "key1", 00:21:35.460 "dhchap_ctrlr_key": "ckey2", 00:21:35.460 "method": "bdev_nvme_attach_controller", 00:21:35.460 "req_id": 1 00:21:35.460 } 00:21:35.460 Got JSON-RPC error response 00:21:35.460 response: 00:21:35.460 { 00:21:35.460 "code": -5, 00:21:35.460 "message": "Input/output error" 00:21:35.460 } 00:21:35.460 15:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:35.460 15:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:35.460 15:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:35.460 15:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:35.460 15:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.460 15:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.460 15:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.460 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.460 15:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:35.460 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.460 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.460 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.460 15:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.460 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:35.460 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.460 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:21:35.460 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:35.460 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:35.460 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:35.460 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.460 15:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.392 request: 00:21:36.392 { 00:21:36.392 "name": "nvme0", 00:21:36.392 "trtype": "tcp", 00:21:36.392 "traddr": "10.0.0.2", 00:21:36.392 "adrfam": "ipv4", 00:21:36.392 "trsvcid": "4420", 00:21:36.392 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:36.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:36.392 "prchk_reftag": false, 00:21:36.392 "prchk_guard": false, 00:21:36.392 "hdgst": false, 00:21:36.392 "ddgst": false, 00:21:36.392 "dhchap_key": "key1", 00:21:36.392 "dhchap_ctrlr_key": "ckey1", 00:21:36.392 "method": "bdev_nvme_attach_controller", 00:21:36.392 "req_id": 1 00:21:36.392 } 00:21:36.392 Got JSON-RPC error response 00:21:36.392 response: 00:21:36.392 { 00:21:36.392 "code": -5, 00:21:36.392 "message": "Input/output error" 00:21:36.392 } 00:21:36.392 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:36.392 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:36.392 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:36.392 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:36.392 15:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.392 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.392 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.392 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.392 15:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1112936 00:21:36.392 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1112936 ']' 00:21:36.392 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1112936 00:21:36.392 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:36.392 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:36.392 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1112936 00:21:36.392 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:36.392 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:21:36.392 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1112936' 00:21:36.392 killing process with pid 1112936 00:21:36.392 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1112936 00:21:36.392 15:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1112936 00:21:36.649 15:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:36.649 15:33:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:36.649 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:36.649 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.649 15:33:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1135267 00:21:36.649 15:33:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:36.649 15:33:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1135267 00:21:36.649 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1135267 ']' 00:21:36.649 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.649 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:36.649 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.649 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:36.649 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.907 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:36.907 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:36.907 15:33:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:36.907 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:36.907 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.907 15:33:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.907 15:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:36.907 15:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1135267 00:21:36.907 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1135267 ']' 00:21:36.907 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.907 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:36.907 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
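[Editor's note: the block below is a condensed sketch added for readability, not part of the captured output. It summarizes the RPC sequence that target/auth.sh repeats above for each digest/dhgroup/key combination (sha512 with ffdhe6144 and ffdhe8192 in this run), using only the rpc.py subcommands and jq filters that appear verbatim in this log; the socket paths, NQNs, IP addresses and key names are specific to this test environment and are assumptions outside it.]

# Sketch of one connect_authenticate iteration. Assumes the nvmf target and the
# host application (listening on /var/tmp/host.sock) are already running and the
# DH-HMAC-CHAP keys "key1"/"ckey1" were registered earlier in the test.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Restrict the host-side initiator to a single digest/dhgroup pair.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# Authorize the host on the subsystem with a DH-HMAC-CHAP key (plus optional controller key).
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attach a controller from the host application so authentication actually runs.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Confirm the negotiated parameters on the target side.
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

# Tear down before the next combination.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"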
00:21:36.907 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:36.907 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.165 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:37.165 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:37.165 15:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:37.165 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.165 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.165 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.165 15:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:37.165 15:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.165 15:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:37.165 15:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:37.165 15:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:37.165 15:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.165 15:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:37.165 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.165 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.165 15:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.165 15:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:37.165 15:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:38.099 00:21:38.099 15:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:38.099 15:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.099 15:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:38.356 15:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.356 15:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.356 15:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.356 15:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.356 15:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.356 15:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.356 { 00:21:38.356 
"cntlid": 1, 00:21:38.356 "qid": 0, 00:21:38.356 "state": "enabled", 00:21:38.356 "thread": "nvmf_tgt_poll_group_000", 00:21:38.356 "listen_address": { 00:21:38.356 "trtype": "TCP", 00:21:38.356 "adrfam": "IPv4", 00:21:38.356 "traddr": "10.0.0.2", 00:21:38.356 "trsvcid": "4420" 00:21:38.356 }, 00:21:38.356 "peer_address": { 00:21:38.356 "trtype": "TCP", 00:21:38.356 "adrfam": "IPv4", 00:21:38.356 "traddr": "10.0.0.1", 00:21:38.356 "trsvcid": "37916" 00:21:38.356 }, 00:21:38.356 "auth": { 00:21:38.356 "state": "completed", 00:21:38.356 "digest": "sha512", 00:21:38.357 "dhgroup": "ffdhe8192" 00:21:38.357 } 00:21:38.357 } 00:21:38.357 ]' 00:21:38.357 15:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.357 15:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.357 15:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.357 15:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:38.357 15:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.615 15:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.615 15:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.615 15:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.873 15:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OTUwM2RjNTUyYTBmYjQ5MDc5M2FmYzk3MjA5YmNmMmFhYzM0Y2I3Yjk4ZGEzOTdiNzlhNDk0MTUzMmQ2OTZlN0S9laE=: 00:21:39.804 15:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.804 15:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.804 15:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.804 15:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.804 15:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.804 15:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:39.804 15:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.804 15:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.804 15:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.804 15:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:39.804 15:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:40.062 15:33:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.062 15:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:40.062 15:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.062 15:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:40.062 15:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:40.062 15:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:40.062 15:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:40.062 15:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.062 15:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.319 request: 00:21:40.319 { 00:21:40.319 "name": "nvme0", 00:21:40.319 "trtype": "tcp", 00:21:40.319 "traddr": "10.0.0.2", 00:21:40.319 "adrfam": "ipv4", 00:21:40.319 "trsvcid": "4420", 00:21:40.319 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:40.319 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:40.319 "prchk_reftag": false, 00:21:40.319 "prchk_guard": false, 00:21:40.319 "hdgst": false, 00:21:40.319 "ddgst": false, 00:21:40.319 "dhchap_key": "key3", 00:21:40.319 "method": "bdev_nvme_attach_controller", 00:21:40.319 "req_id": 1 00:21:40.319 } 00:21:40.319 Got JSON-RPC error response 00:21:40.319 response: 00:21:40.319 { 00:21:40.319 "code": -5, 00:21:40.319 "message": "Input/output error" 00:21:40.319 } 00:21:40.319 15:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:40.319 15:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:40.319 15:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:40.319 15:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:40.319 15:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:40.319 15:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:40.319 15:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:40.319 15:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:40.577 15:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.577 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:40.577 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.577 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:40.577 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:40.577 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:40.577 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:40.577 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.577 15:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.835 request: 00:21:40.835 { 00:21:40.835 "name": "nvme0", 00:21:40.835 "trtype": "tcp", 00:21:40.835 "traddr": "10.0.0.2", 00:21:40.835 "adrfam": "ipv4", 00:21:40.835 "trsvcid": "4420", 00:21:40.835 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:40.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:40.835 "prchk_reftag": false, 00:21:40.835 "prchk_guard": false, 00:21:40.835 "hdgst": false, 00:21:40.835 "ddgst": false, 00:21:40.835 "dhchap_key": "key3", 00:21:40.835 "method": "bdev_nvme_attach_controller", 00:21:40.835 "req_id": 1 00:21:40.835 } 00:21:40.835 Got JSON-RPC error response 00:21:40.835 response: 00:21:40.835 { 00:21:40.835 "code": -5, 00:21:40.835 "message": "Input/output error" 00:21:40.835 } 00:21:40.835 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:40.835 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:40.835 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:40.835 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:40.835 15:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:40.835 15:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:40.835 15:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:40.835 15:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:40.835 15:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:40.835 15:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:41.093 15:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.093 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.093 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.093 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.093 15:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.093 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.093 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.093 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.093 15:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:41.093 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:41.093 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:41.093 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:41.093 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.093 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:41.093 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.093 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:41.093 15:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:41.351 request: 00:21:41.351 { 00:21:41.351 "name": "nvme0", 00:21:41.351 "trtype": "tcp", 00:21:41.351 "traddr": "10.0.0.2", 00:21:41.351 "adrfam": "ipv4", 00:21:41.351 "trsvcid": "4420", 00:21:41.351 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:41.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:41.351 "prchk_reftag": false, 00:21:41.351 "prchk_guard": false, 00:21:41.351 "hdgst": false, 00:21:41.351 "ddgst": false, 00:21:41.351 
"dhchap_key": "key0", 00:21:41.351 "dhchap_ctrlr_key": "key1", 00:21:41.351 "method": "bdev_nvme_attach_controller", 00:21:41.351 "req_id": 1 00:21:41.351 } 00:21:41.351 Got JSON-RPC error response 00:21:41.351 response: 00:21:41.351 { 00:21:41.351 "code": -5, 00:21:41.351 "message": "Input/output error" 00:21:41.351 } 00:21:41.351 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:41.351 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:41.351 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:41.351 15:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:41.351 15:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:41.351 15:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:41.610 00:21:41.610 15:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:41.610 15:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:21:41.610 15:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.867 15:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.868 15:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.868 15:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.162 15:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:42.162 15:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:42.162 15:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1113008 00:21:42.162 15:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1113008 ']' 00:21:42.162 15:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1113008 00:21:42.162 15:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:42.162 15:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:42.162 15:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1113008 00:21:42.162 15:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:42.162 15:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:42.162 15:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1113008' 00:21:42.162 killing process with pid 1113008 00:21:42.162 15:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1113008 00:21:42.162 15:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1113008 
00:21:42.437 15:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:42.437 15:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:42.437 15:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:42.437 15:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:42.437 15:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:42.437 15:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:42.437 15:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:42.437 rmmod nvme_tcp 00:21:42.437 rmmod nvme_fabrics 00:21:42.437 rmmod nvme_keyring 00:21:42.695 15:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:42.695 15:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:42.695 15:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:42.695 15:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1135267 ']' 00:21:42.695 15:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1135267 00:21:42.695 15:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1135267 ']' 00:21:42.695 15:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1135267 00:21:42.695 15:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:42.695 15:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:42.695 15:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1135267 00:21:42.695 15:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:42.695 15:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:42.695 15:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1135267' 00:21:42.695 killing process with pid 1135267 00:21:42.695 15:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1135267 00:21:42.695 15:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1135267 00:21:42.953 15:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:42.953 15:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:42.953 15:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:42.953 15:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:42.953 15:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:42.953 15:33:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.953 15:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.953 15:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.854 15:33:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:44.854 15:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.6ik /tmp/spdk.key-sha256.hDX /tmp/spdk.key-sha384.wMS /tmp/spdk.key-sha512.oUs /tmp/spdk.key-sha512.Frt /tmp/spdk.key-sha384.4Vx /tmp/spdk.key-sha256.SQj '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:44.854 00:21:44.854 real 3m9.697s 00:21:44.854 user 7m21.490s 00:21:44.854 sys 0m25.032s 00:21:44.854 15:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:44.854 15:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.854 ************************************ 00:21:44.854 END TEST nvmf_auth_target 00:21:44.854 ************************************ 00:21:44.854 15:33:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:44.854 15:33:15 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:21:44.854 15:33:15 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:44.854 15:33:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:44.854 15:33:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:44.854 15:33:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:44.854 ************************************ 00:21:44.854 START TEST nvmf_bdevio_no_huge 00:21:44.854 ************************************ 00:21:44.854 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:45.112 * Looking for test storage... 00:21:45.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:45.112 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:45.113 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.113 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.113 15:33:15 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.113 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:45.113 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:45.113 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:45.113 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:45.113 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:45.113 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:45.113 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:45.113 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.113 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:45.113 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:45.113 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:45.113 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.113 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.113 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.113 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:45.113 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:45.113 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:45.113 15:33:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:47.013 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:47.013 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:47.013 
15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.013 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:47.014 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:47.014 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.014 15:33:17 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.014 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.271 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.271 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:47.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:47.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:21:47.272 00:21:47.272 --- 10.0.0.2 ping statistics --- 00:21:47.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.272 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:21:47.272 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:47.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:21:47.272 00:21:47.272 --- 10.0.0.1 ping statistics --- 00:21:47.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.272 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:21:47.272 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.272 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:47.272 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:47.272 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.272 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:47.272 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:47.272 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.272 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:47.272 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:47.272 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:47.272 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:47.272 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:47.272 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:47.272 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1138466 00:21:47.272 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:47.272 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1138466 00:21:47.272 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1138466 ']' 00:21:47.272 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.272 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:47.272 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.272 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:47.272 15:33:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:47.272 [2024-07-13 15:33:17.868711] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:21:47.272 [2024-07-13 15:33:17.868791] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:47.272 [2024-07-13 15:33:17.918176] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:47.272 [2024-07-13 15:33:17.936554] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:47.272 [2024-07-13 15:33:18.015101] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.272 [2024-07-13 15:33:18.015179] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.272 [2024-07-13 15:33:18.015193] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.272 [2024-07-13 15:33:18.015204] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.272 [2024-07-13 15:33:18.015214] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
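The launch that produced the EAL output above, condensed (flags copied from the trace): the point of the no-huge variant is that the target runs without hugepages, taking a plain 1024 MB allocation instead.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -m 0x78 pins the reactors to cores 3-6, matching the reactor messages below.
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
      -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78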
00:21:47.272 [2024-07-13 15:33:18.015305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:47.272 [2024-07-13 15:33:18.015369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:21:47.272 [2024-07-13 15:33:18.015503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:21:47.272 [2024-07-13 15:33:18.015506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:47.530 [2024-07-13 15:33:18.136732] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:47.530 Malloc0 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:47.530 [2024-07-13 15:33:18.175031] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:47.530 { 00:21:47.530 "params": { 00:21:47.530 "name": "Nvme$subsystem", 00:21:47.530 "trtype": "$TEST_TRANSPORT", 00:21:47.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.530 "adrfam": "ipv4", 00:21:47.530 "trsvcid": "$NVMF_PORT", 00:21:47.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.530 "hdgst": ${hdgst:-false}, 00:21:47.530 "ddgst": ${ddgst:-false} 00:21:47.530 }, 00:21:47.530 "method": "bdev_nvme_attach_controller" 00:21:47.530 } 00:21:47.530 EOF 00:21:47.530 )") 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:47.530 15:33:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:47.530 "params": { 00:21:47.530 "name": "Nvme1", 00:21:47.530 "trtype": "tcp", 00:21:47.530 "traddr": "10.0.0.2", 00:21:47.530 "adrfam": "ipv4", 00:21:47.530 "trsvcid": "4420", 00:21:47.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:47.530 "hdgst": false, 00:21:47.530 "ddgst": false 00:21:47.530 }, 00:21:47.530 "method": "bdev_nvme_attach_controller" 00:21:47.530 }' 00:21:47.530 [2024-07-13 15:33:18.222349] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:21:47.530 [2024-07-13 15:33:18.222421] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1138494 ] 00:21:47.530 [2024-07-13 15:33:18.262929] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
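A condensed view of the target-side provisioning the trace above performs before bdevio starts; the RPCs and arguments are as logged, and bdevio is then pointed at the generated JSON config (printed above) that attaches Nvme1 over TCP:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0            # 64 MiB malloc bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420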
00:21:47.530 [2024-07-13 15:33:18.282759] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:47.788 [2024-07-13 15:33:18.369803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.788 [2024-07-13 15:33:18.369851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.788 [2024-07-13 15:33:18.369854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.045 I/O targets: 00:21:48.045 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:48.045 00:21:48.045 00:21:48.045 CUnit - A unit testing framework for C - Version 2.1-3 00:21:48.045 http://cunit.sourceforge.net/ 00:21:48.045 00:21:48.045 00:21:48.045 Suite: bdevio tests on: Nvme1n1 00:21:48.045 Test: blockdev write read block ...passed 00:21:48.045 Test: blockdev write zeroes read block ...passed 00:21:48.045 Test: blockdev write zeroes read no split ...passed 00:21:48.303 Test: blockdev write zeroes read split ...passed 00:21:48.303 Test: blockdev write zeroes read split partial ...passed 00:21:48.303 Test: blockdev reset ...[2024-07-13 15:33:18.904348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.303 [2024-07-13 15:33:18.904469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ae330 (9): Bad file descriptor 00:21:48.303 [2024-07-13 15:33:18.999765] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:48.303 passed 00:21:48.303 Test: blockdev write read 8 blocks ...passed 00:21:48.303 Test: blockdev write read size > 128k ...passed 00:21:48.303 Test: blockdev write read invalid size ...passed 00:21:48.303 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:48.303 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:48.303 Test: blockdev write read max offset ...passed 00:21:48.558 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:48.558 Test: blockdev writev readv 8 blocks ...passed 00:21:48.558 Test: blockdev writev readv 30 x 1block ...passed 00:21:48.558 Test: blockdev writev readv block ...passed 00:21:48.559 Test: blockdev writev readv size > 128k ...passed 00:21:48.559 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:48.559 Test: blockdev comparev and writev ...[2024-07-13 15:33:19.258025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:48.559 [2024-07-13 15:33:19.258061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.559 [2024-07-13 15:33:19.258092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:48.559 [2024-07-13 15:33:19.258111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.559 [2024-07-13 15:33:19.258463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:48.559 [2024-07-13 15:33:19.258487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:48.559 [2024-07-13 15:33:19.258509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:21:48.559 [2024-07-13 15:33:19.258525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:48.559 [2024-07-13 15:33:19.258883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:48.559 [2024-07-13 15:33:19.258907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:48.559 [2024-07-13 15:33:19.258935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:48.559 [2024-07-13 15:33:19.258951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:48.559 [2024-07-13 15:33:19.259298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:48.559 [2024-07-13 15:33:19.259322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:48.559 [2024-07-13 15:33:19.259343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:48.559 [2024-07-13 15:33:19.259360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:48.559 passed 00:21:48.816 Test: blockdev nvme passthru rw ...passed 00:21:48.816 Test: blockdev nvme passthru vendor specific ...[2024-07-13 15:33:19.342206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:48.816 [2024-07-13 15:33:19.342233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:48.816 [2024-07-13 15:33:19.342424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:48.816 [2024-07-13 15:33:19.342447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:48.816 [2024-07-13 15:33:19.342631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:48.816 [2024-07-13 15:33:19.342653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:48.816 [2024-07-13 15:33:19.342836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:48.816 [2024-07-13 15:33:19.342877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:48.816 passed 00:21:48.816 Test: blockdev nvme admin passthru ...passed 00:21:48.816 Test: blockdev copy ...passed 00:21:48.816 00:21:48.816 Run Summary: Type Total Ran Passed Failed Inactive 00:21:48.816 suites 1 1 n/a 0 0 00:21:48.816 tests 23 23 23 0 0 00:21:48.816 asserts 152 152 152 0 n/a 00:21:48.816 00:21:48.816 Elapsed time = 1.408 seconds 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:49.074 rmmod nvme_tcp 00:21:49.074 rmmod nvme_fabrics 00:21:49.074 rmmod nvme_keyring 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1138466 ']' 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1138466 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1138466 ']' 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1138466 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1138466 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1138466' 00:21:49.074 killing process with pid 1138466 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1138466 00:21:49.074 15:33:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1138466 00:21:49.660 15:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:49.660 15:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:49.660 15:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:49.660 15:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:49.660 15:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:49.660 15:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.660 15:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:49.660 15:33:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:21:51.556 15:33:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:51.556 00:21:51.556 real 0m6.635s 00:21:51.556 user 0m11.729s 00:21:51.556 sys 0m2.511s 00:21:51.556 15:33:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:51.556 15:33:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:51.556 ************************************ 00:21:51.556 END TEST nvmf_bdevio_no_huge 00:21:51.556 ************************************ 00:21:51.556 15:33:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:51.556 15:33:22 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:51.556 15:33:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:51.556 15:33:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:51.556 15:33:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:51.556 ************************************ 00:21:51.556 START TEST nvmf_tls 00:21:51.556 ************************************ 00:21:51.556 15:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:51.556 * Looking for test storage... 00:21:51.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:51.556 15:33:22 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:51.556 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:51.556 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.556 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.556 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.814 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:51.815 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:51.815 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:51.815 15:33:22 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
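(The rest of this test drives the target through the rpc_py wrapper defined above. Before the TLS cases start, tls.sh uses it to pin the ssl socket implementation and its TLS version while the target is still in --wait-for-rpc state; the calls traced further down amount to the following sequence, shown here as a sketch with the same arguments as in the trace.)

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py sock_set_default_impl -i ssl                    # make ssl the default socket impl
    $rpc_py sock_impl_set_options -i ssl --tls-version 13   # require TLS 1.3
    $rpc_py framework_start_init                            # leave --wait-for-rpc state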
00:21:51.815 15:33:22 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:51.815 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:51.815 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.815 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:51.815 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:51.815 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:51.815 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.815 15:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:51.815 15:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.815 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:51.815 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:51.815 15:33:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:51.815 15:33:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.711 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:53.711 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:53.711 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:53.711 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:53.711 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:53.711 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:53.711 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:53.711 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:53.711 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:53.711 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:53.711 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:53.712 
15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:53.712 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:53.712 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:53.712 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:53.712 15:33:24 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:53.712 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:53.712 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:53.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:53.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:21:53.970 00:21:53.970 --- 10.0.0.2 ping statistics --- 00:21:53.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.970 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:53.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:53.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:21:53.970 00:21:53.970 --- 10.0.0.1 ping statistics --- 00:21:53.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.970 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1140679 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1140679 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1140679 ']' 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:53.970 15:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.970 [2024-07-13 15:33:24.605260] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:21:53.970 [2024-07-13 15:33:24.605342] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:53.970 EAL: No free 2048 kB hugepages reported on node 1 00:21:53.970 [2024-07-13 15:33:24.650652] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:53.970 [2024-07-13 15:33:24.681242] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.228 [2024-07-13 15:33:24.775313] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.228 [2024-07-13 15:33:24.775362] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.228 [2024-07-13 15:33:24.775387] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.228 [2024-07-13 15:33:24.775399] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.228 [2024-07-13 15:33:24.775410] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:54.228 [2024-07-13 15:33:24.775435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.228 15:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:54.228 15:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:54.228 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:54.228 15:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:54.228 15:33:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.228 15:33:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.228 15:33:24 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:54.228 15:33:24 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:54.485 true 00:21:54.485 15:33:25 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:54.485 15:33:25 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:54.742 15:33:25 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:54.742 15:33:25 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:54.742 15:33:25 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:54.999 15:33:25 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:54.999 15:33:25 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:55.255 15:33:25 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:55.255 15:33:25 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:55.256 15:33:25 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:55.513 15:33:26 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_get_options -i ssl 00:21:55.513 15:33:26 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:55.771 15:33:26 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:55.771 15:33:26 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:55.771 15:33:26 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:55.771 15:33:26 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:56.029 15:33:26 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:56.029 15:33:26 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:56.029 15:33:26 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:56.286 15:33:26 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:56.286 15:33:26 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:56.543 15:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:56.543 15:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:56.543 15:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:56.801 15:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:56.801 15:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@121 
-- # mktemp 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.ptPptYhALS 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.81pk09EtlG 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.ptPptYhALS 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.81pk09EtlG 00:21:57.059 15:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:57.339 15:33:27 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:57.608 15:33:28 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.ptPptYhALS 00:21:57.608 15:33:28 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ptPptYhALS 00:21:57.608 15:33:28 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:57.887 [2024-07-13 15:33:28.592455] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.887 15:33:28 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:58.145 15:33:28 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:58.402 [2024-07-13 15:33:29.125917] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:58.402 [2024-07-13 15:33:29.126181] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.402 15:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:58.659 malloc0 00:21:58.660 15:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:58.917 15:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ptPptYhALS 00:21:59.174 [2024-07-13 15:33:29.891446] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:59.174 15:33:29 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ptPptYhALS 00:21:59.174 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.405 Initializing NVMe Controllers 00:22:11.405 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:22:11.405 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:11.405 Initialization complete. Launching workers. 00:22:11.405 ======================================================== 00:22:11.405 Latency(us) 00:22:11.405 Device Information : IOPS MiB/s Average min max 00:22:11.405 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7667.57 29.95 8349.80 1180.93 10164.33 00:22:11.405 ======================================================== 00:22:11.405 Total : 7667.57 29.95 8349.80 1180.93 10164.33 00:22:11.405 00:22:11.406 15:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ptPptYhALS 00:22:11.406 15:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:11.406 15:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:11.406 15:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:11.406 15:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ptPptYhALS' 00:22:11.406 15:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:11.406 15:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1142471 00:22:11.406 15:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:11.406 15:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:11.406 15:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1142471 /var/tmp/bdevperf.sock 00:22:11.406 15:33:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1142471 ']' 00:22:11.406 15:33:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:11.406 15:33:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.406 15:33:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:11.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:11.406 15:33:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.406 15:33:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.406 [2024-07-13 15:33:40.062027] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:11.406 [2024-07-13 15:33:40.062101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1142471 ] 00:22:11.406 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.406 [2024-07-13 15:33:40.098927] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
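(The run_bdevperf helper used from here on follows one pattern: bdevperf is started idle with -z on its own RPC socket, the TLS-enabled controller is attached over that socket with the PSK, and bdevperf.py then kicks off the I/O. Condensed from the commands traced around this point; sketch only, full jenkins workspace paths shortened.)

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.ptPptYhALS
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests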
00:22:11.406 [2024-07-13 15:33:40.126443] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.406 [2024-07-13 15:33:40.211927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.406 15:33:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:11.406 15:33:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:11.406 15:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ptPptYhALS 00:22:11.406 [2024-07-13 15:33:40.585903] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:11.406 [2024-07-13 15:33:40.586010] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:11.406 TLSTESTn1 00:22:11.406 15:33:40 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:11.406 Running I/O for 10 seconds... 00:22:21.370 00:22:21.370 Latency(us) 00:22:21.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.370 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:21.370 Verification LBA range: start 0x0 length 0x2000 00:22:21.370 TLSTESTn1 : 10.04 1264.51 4.94 0.00 0.00 101046.13 13010.11 98255.45 00:22:21.370 =================================================================================================================== 00:22:21.370 Total : 1264.51 4.94 0.00 0.00 101046.13 13010.11 98255.45 00:22:21.370 0 00:22:21.370 15:33:50 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:21.370 15:33:50 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1142471 00:22:21.370 15:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1142471 ']' 00:22:21.370 15:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1142471 00:22:21.370 15:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:21.370 15:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:21.370 15:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1142471 00:22:21.370 15:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:21.370 15:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:21.370 15:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1142471' 00:22:21.370 killing process with pid 1142471 00:22:21.370 15:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1142471 00:22:21.370 Received shutdown signal, test time was about 10.000000 seconds 00:22:21.370 00:22:21.370 Latency(us) 00:22:21.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.371 =================================================================================================================== 00:22:21.371 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:21.371 [2024-07-13 15:33:50.902272] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:22:21.371 15:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1142471 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.81pk09EtlG 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.81pk09EtlG 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.81pk09EtlG 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.81pk09EtlG' 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1143778 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1143778 /var/tmp/bdevperf.sock 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1143778 ']' 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:21.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.371 [2024-07-13 15:33:51.176137] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:21.371 [2024-07-13 15:33:51.176232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1143778 ] 00:22:21.371 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.371 [2024-07-13 15:33:51.208071] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:22:21.371 [2024-07-13 15:33:51.235315] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.371 [2024-07-13 15:33:51.316127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.81pk09EtlG 00:22:21.371 [2024-07-13 15:33:51.657998] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:21.371 [2024-07-13 15:33:51.658119] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:21.371 [2024-07-13 15:33:51.663788] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:21.371 [2024-07-13 15:33:51.664248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11678d0 (107): Transport endpoint is not connected 00:22:21.371 [2024-07-13 15:33:51.665220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11678d0 (9): Bad file descriptor 00:22:21.371 [2024-07-13 15:33:51.666219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:21.371 [2024-07-13 15:33:51.666241] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:21.371 [2024-07-13 15:33:51.666268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:21.371 request: 00:22:21.371 { 00:22:21.371 "name": "TLSTEST", 00:22:21.371 "trtype": "tcp", 00:22:21.371 "traddr": "10.0.0.2", 00:22:21.371 "adrfam": "ipv4", 00:22:21.371 "trsvcid": "4420", 00:22:21.371 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.371 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:21.371 "prchk_reftag": false, 00:22:21.371 "prchk_guard": false, 00:22:21.371 "hdgst": false, 00:22:21.371 "ddgst": false, 00:22:21.371 "psk": "/tmp/tmp.81pk09EtlG", 00:22:21.371 "method": "bdev_nvme_attach_controller", 00:22:21.371 "req_id": 1 00:22:21.371 } 00:22:21.371 Got JSON-RPC error response 00:22:21.371 response: 00:22:21.371 { 00:22:21.371 "code": -5, 00:22:21.371 "message": "Input/output error" 00:22:21.371 } 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1143778 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1143778 ']' 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1143778 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1143778 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1143778' 00:22:21.371 killing process with pid 1143778 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1143778 00:22:21.371 Received shutdown signal, test time was about 10.000000 seconds 00:22:21.371 00:22:21.371 Latency(us) 00:22:21.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.371 =================================================================================================================== 00:22:21.371 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:21.371 [2024-07-13 15:33:51.712519] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1143778 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ptPptYhALS 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ptPptYhALS 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ptPptYhALS 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ptPptYhALS' 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1143919 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1143919 /var/tmp/bdevperf.sock 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1143919 ']' 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:21.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:21.371 15:33:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.371 [2024-07-13 15:33:51.948875] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:21.371 [2024-07-13 15:33:51.948980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1143919 ] 00:22:21.371 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.371 [2024-07-13 15:33:51.981219] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
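(This negative case uses the correct key for the subsystem but connects as host2, which was never registered with nvmf_subsystem_add_host above, so the target cannot find a PSK for the identity NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 and the attach traced below is expected to fail. The attach that must fail boils down to the following; sketch only, paths shortened.)

    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 \
        --psk /tmp/tmp.ptPptYhALS   # valid key, but host2 has no PSK registered on the target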
00:22:21.371 [2024-07-13 15:33:52.008594] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.371 [2024-07-13 15:33:52.093493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.629 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:21.629 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:21.629 15:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.ptPptYhALS 00:22:21.887 [2024-07-13 15:33:52.405855] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:21.887 [2024-07-13 15:33:52.405997] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:21.887 [2024-07-13 15:33:52.415569] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:21.887 [2024-07-13 15:33:52.415606] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:21.887 [2024-07-13 15:33:52.415660] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:21.887 [2024-07-13 15:33:52.415777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dad8d0 (107): Transport endpoint is not connected 00:22:21.887 [2024-07-13 15:33:52.416767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dad8d0 (9): Bad file descriptor 00:22:21.887 [2024-07-13 15:33:52.417767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:21.887 [2024-07-13 15:33:52.417788] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:21.887 [2024-07-13 15:33:52.417805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
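The lookup failure just traced is the expected outcome of this negative case: the target resolves the TLS PSK through an identity string built from the host and subsystem NQNs (the "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" string in the error above), and no key was registered for host2 against cnode1, so the handshake is refused and the attach fails. For illustration only, a registration that would make that identity resolvable uses the same add_host RPC this run issues later for host1 (the key path shown is purely hypothetical for this pairing):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 \
        --psk /tmp/tmp.ptPptYhALS    # hypothetical; the test deliberately skips this step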
00:22:21.887 request: 00:22:21.887 { 00:22:21.887 "name": "TLSTEST", 00:22:21.887 "trtype": "tcp", 00:22:21.887 "traddr": "10.0.0.2", 00:22:21.887 "adrfam": "ipv4", 00:22:21.887 "trsvcid": "4420", 00:22:21.887 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.887 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:21.887 "prchk_reftag": false, 00:22:21.887 "prchk_guard": false, 00:22:21.887 "hdgst": false, 00:22:21.887 "ddgst": false, 00:22:21.887 "psk": "/tmp/tmp.ptPptYhALS", 00:22:21.887 "method": "bdev_nvme_attach_controller", 00:22:21.887 "req_id": 1 00:22:21.887 } 00:22:21.887 Got JSON-RPC error response 00:22:21.887 response: 00:22:21.887 { 00:22:21.887 "code": -5, 00:22:21.887 "message": "Input/output error" 00:22:21.887 } 00:22:21.887 15:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1143919 00:22:21.887 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1143919 ']' 00:22:21.887 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1143919 00:22:21.887 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:21.887 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:21.887 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1143919 00:22:21.887 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:21.887 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:21.887 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1143919' 00:22:21.887 killing process with pid 1143919 00:22:21.887 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1143919 00:22:21.887 Received shutdown signal, test time was about 10.000000 seconds 00:22:21.887 00:22:21.887 Latency(us) 00:22:21.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.887 =================================================================================================================== 00:22:21.887 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:21.887 [2024-07-13 15:33:52.469686] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:21.887 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1143919 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ptPptYhALS 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ptPptYhALS 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ptPptYhALS 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ptPptYhALS' 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1143935 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1143935 /var/tmp/bdevperf.sock 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1143935 ']' 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:22.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:22.145 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.145 [2024-07-13 15:33:52.726446] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:22.146 [2024-07-13 15:33:52.726530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1143935 ] 00:22:22.146 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.146 [2024-07-13 15:33:52.758684] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:22.146 [2024-07-13 15:33:52.787030] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.146 [2024-07-13 15:33:52.872682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.403 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:22.403 15:33:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:22.403 15:33:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ptPptYhALS 00:22:22.660 [2024-07-13 15:33:53.200310] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:22.660 [2024-07-13 15:33:53.200434] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:22.660 [2024-07-13 15:33:53.207603] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:22.660 [2024-07-13 15:33:53.207634] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:22.660 [2024-07-13 15:33:53.207676] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:22.660 [2024-07-13 15:33:53.208485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf158d0 (107): Transport endpoint is not connected 00:22:22.660 [2024-07-13 15:33:53.209461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf158d0 (9): Bad file descriptor 00:22:22.660 [2024-07-13 15:33:53.210460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:22.660 [2024-07-13 15:33:53.210480] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:22.660 [2024-07-13 15:33:53.210497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
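This second negative case fails the same way with the roles swapped: host1 is offered against cnode2, for which no PSK identity exists. All of these cases are driven through the NOT helper visible in the surrounding xtrace (valid_exec_arg, the es checks, tls.sh's return 1); stripped to its essentials the pattern is the sketch below, with the real helper in autotest_common.sh additionally checking that the command exists and that the failure did not come from a signal (es > 128):

    NOT() {
        local es=0
        "$@" || es=$?
        ((es != 0))    # the negative test passes only if the wrapped command failed
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ptPptYhALS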
00:22:22.660 request: 00:22:22.660 { 00:22:22.660 "name": "TLSTEST", 00:22:22.660 "trtype": "tcp", 00:22:22.660 "traddr": "10.0.0.2", 00:22:22.660 "adrfam": "ipv4", 00:22:22.660 "trsvcid": "4420", 00:22:22.660 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:22.660 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:22.660 "prchk_reftag": false, 00:22:22.660 "prchk_guard": false, 00:22:22.660 "hdgst": false, 00:22:22.660 "ddgst": false, 00:22:22.660 "psk": "/tmp/tmp.ptPptYhALS", 00:22:22.660 "method": "bdev_nvme_attach_controller", 00:22:22.660 "req_id": 1 00:22:22.660 } 00:22:22.660 Got JSON-RPC error response 00:22:22.660 response: 00:22:22.660 { 00:22:22.660 "code": -5, 00:22:22.660 "message": "Input/output error" 00:22:22.660 } 00:22:22.660 15:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1143935 00:22:22.660 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1143935 ']' 00:22:22.660 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1143935 00:22:22.660 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:22.660 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:22.660 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1143935 00:22:22.660 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:22.660 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:22.660 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1143935' 00:22:22.660 killing process with pid 1143935 00:22:22.660 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1143935 00:22:22.660 Received shutdown signal, test time was about 10.000000 seconds 00:22:22.660 00:22:22.660 Latency(us) 00:22:22.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.660 =================================================================================================================== 00:22:22.660 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:22.660 [2024-07-13 15:33:53.253344] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:22.660 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1143935 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1144069 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1144069 /var/tmp/bdevperf.sock 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1144069 ']' 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:22.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:22.918 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.918 [2024-07-13 15:33:53.486195] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:22.918 [2024-07-13 15:33:53.486290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144069 ] 00:22:22.918 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.918 [2024-07-13 15:33:53.518160] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:22.918 [2024-07-13 15:33:53.545646] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.918 [2024-07-13 15:33:53.631498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.175 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:23.175 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:23.175 15:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:23.433 [2024-07-13 15:33:53.979715] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:23.433 [2024-07-13 15:33:53.981673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1ede0 (9): Bad file descriptor 00:22:23.433 [2024-07-13 15:33:53.982668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:23.433 [2024-07-13 15:33:53.982690] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:23.433 [2024-07-13 15:33:53.982707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:23.433 request: 00:22:23.433 { 00:22:23.433 "name": "TLSTEST", 00:22:23.433 "trtype": "tcp", 00:22:23.433 "traddr": "10.0.0.2", 00:22:23.433 "adrfam": "ipv4", 00:22:23.433 "trsvcid": "4420", 00:22:23.433 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.433 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:23.433 "prchk_reftag": false, 00:22:23.433 "prchk_guard": false, 00:22:23.433 "hdgst": false, 00:22:23.433 "ddgst": false, 00:22:23.433 "method": "bdev_nvme_attach_controller", 00:22:23.433 "req_id": 1 00:22:23.433 } 00:22:23.433 Got JSON-RPC error response 00:22:23.433 response: 00:22:23.433 { 00:22:23.433 "code": -5, 00:22:23.433 "message": "Input/output error" 00:22:23.433 } 00:22:23.433 15:33:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1144069 00:22:23.434 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1144069 ']' 00:22:23.434 15:33:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1144069 00:22:23.434 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:23.434 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:23.434 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1144069 00:22:23.434 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:23.434 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:23.434 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1144069' 00:22:23.434 killing process with pid 1144069 00:22:23.434 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1144069 00:22:23.434 Received shutdown signal, test time was about 10.000000 seconds 00:22:23.434 00:22:23.434 Latency(us) 00:22:23.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.434 =================================================================================================================== 00:22:23.434 Total : 0.00 0.00 0.00 
0.00 0.00 18446744073709551616.00 0.00 00:22:23.434 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1144069 00:22:23.692 15:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:23.692 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:23.692 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:23.692 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:23.692 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:23.692 15:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1140679 00:22:23.692 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1140679 ']' 00:22:23.692 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1140679 00:22:23.692 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:23.692 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:23.692 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1140679 00:22:23.692 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:23.692 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:23.692 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1140679' 00:22:23.692 killing process with pid 1140679 00:22:23.692 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1140679 00:22:23.692 [2024-07-13 15:33:54.279975] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:23.692 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1140679 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.lWhTPabYLA 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.lWhTPabYLA 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 
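The key_long generated a few lines up follows the NVMe TLS PSK interchange layout: the configured key string with a CRC32 appended, base64-encoded, wrapped in an NVMeTLSkey-1:<hash-id>: prefix and a trailing colon (the '2' argument becomes the '02' hash identifier). A rough stand-in for the inline python the script runs, assuming a little-endian CRC, looks like this:

    format_interchange_psk() {
        # sketch only: reproduces the observed output layout; the CRC byte order is an assumption
        local key=$1 digest=$2
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:%02d:" % int(sys.argv[2]) + base64.b64encode(k+crc).decode() + ":")' "$key" "$digest"
    }
    format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
    # expected to print the NVMeTLSkey-1:02:MDAx...wWXNJw==: value captured in the trace above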
00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1144215 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1144215 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1144215 ']' 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:23.950 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.950 [2024-07-13 15:33:54.643432] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:23.950 [2024-07-13 15:33:54.643539] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.950 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.950 [2024-07-13 15:33:54.681300] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:23.950 [2024-07-13 15:33:54.713325] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.207 [2024-07-13 15:33:54.801313] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.207 [2024-07-13 15:33:54.801381] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:24.207 [2024-07-13 15:33:54.801407] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:24.207 [2024-07-13 15:33:54.801422] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:24.207 [2024-07-13 15:33:54.801435] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
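The setup_nvmf_tgt /tmp/tmp.lWhTPabYLA call traced next reduces to the RPC sequence below; these are the same commands that appear verbatim in the following xtrace lines, collected here in order (transport, subsystem, TLS-enabled listener, backing bdev, namespace, and the PSK-bound host):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    key=/tmp/tmp.lWhTPabYLA
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"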
00:22:24.207 [2024-07-13 15:33:54.801465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.207 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:24.207 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:24.207 15:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:24.207 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:24.207 15:33:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.207 15:33:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.207 15:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.lWhTPabYLA 00:22:24.207 15:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lWhTPabYLA 00:22:24.207 15:33:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:24.465 [2024-07-13 15:33:55.178046] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.465 15:33:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:24.723 15:33:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:24.981 [2024-07-13 15:33:55.703505] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:24.981 [2024-07-13 15:33:55.703745] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:24.981 15:33:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:25.239 malloc0 00:22:25.239 15:33:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:25.497 15:33:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lWhTPabYLA 00:22:25.755 [2024-07-13 15:33:56.445728] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:25.755 15:33:56 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lWhTPabYLA 00:22:25.755 15:33:56 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:25.755 15:33:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:25.755 15:33:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:25.755 15:33:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lWhTPabYLA' 00:22:25.755 15:33:56 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:25.755 15:33:56 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1144499 00:22:25.755 15:33:56 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 
10 00:22:25.755 15:33:56 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:25.755 15:33:56 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1144499 /var/tmp/bdevperf.sock 00:22:25.755 15:33:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1144499 ']' 00:22:25.755 15:33:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.755 15:33:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:25.755 15:33:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:25.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:25.755 15:33:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:25.755 15:33:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.755 [2024-07-13 15:33:56.502484] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:25.755 [2024-07-13 15:33:56.502555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144499 ] 00:22:26.014 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.014 [2024-07-13 15:33:56.535135] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:26.014 [2024-07-13 15:33:56.561083] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.014 [2024-07-13 15:33:56.644061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.014 15:33:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:26.014 15:33:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:26.014 15:33:56 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lWhTPabYLA 00:22:26.272 [2024-07-13 15:33:56.968759] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:26.272 [2024-07-13 15:33:56.968872] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:26.530 TLSTESTn1 00:22:26.530 15:33:57 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:26.530 Running I/O for 10 seconds... 
00:22:36.522 00:22:36.522 Latency(us) 00:22:36.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.522 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:36.522 Verification LBA range: start 0x0 length 0x2000 00:22:36.522 TLSTESTn1 : 10.05 2473.08 9.66 0.00 0.00 51618.32 6310.87 82721.00 00:22:36.522 =================================================================================================================== 00:22:36.522 Total : 2473.08 9.66 0.00 0.00 51618.32 6310.87 82721.00 00:22:36.522 0 00:22:36.522 15:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:36.522 15:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1144499 00:22:36.522 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1144499 ']' 00:22:36.522 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1144499 00:22:36.522 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:36.522 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:36.522 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1144499 00:22:36.522 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:36.522 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:36.522 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1144499' 00:22:36.522 killing process with pid 1144499 00:22:36.522 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1144499 00:22:36.522 Received shutdown signal, test time was about 10.000000 seconds 00:22:36.522 00:22:36.522 Latency(us) 00:22:36.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.522 =================================================================================================================== 00:22:36.522 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:36.522 [2024-07-13 15:34:07.276989] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:36.522 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1144499 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.lWhTPabYLA 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lWhTPabYLA 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lWhTPabYLA 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lWhTPabYLA 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn 
psk 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lWhTPabYLA' 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1145796 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1145796 /var/tmp/bdevperf.sock 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1145796 ']' 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:36.779 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.779 [2024-07-13 15:34:07.543904] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:36.779 [2024-07-13 15:34:07.543997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1145796 ] 00:22:37.037 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.037 [2024-07-13 15:34:07.577613] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:37.037 [2024-07-13 15:34:07.606454] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.037 [2024-07-13 15:34:07.691972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.037 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:37.037 15:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:37.037 15:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lWhTPabYLA 00:22:37.602 [2024-07-13 15:34:08.070757] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:37.602 [2024-07-13 15:34:08.070835] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:37.602 [2024-07-13 15:34:08.070850] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.lWhTPabYLA 00:22:37.602 request: 00:22:37.602 { 00:22:37.602 "name": "TLSTEST", 00:22:37.602 "trtype": "tcp", 00:22:37.602 "traddr": "10.0.0.2", 00:22:37.602 "adrfam": "ipv4", 00:22:37.602 "trsvcid": "4420", 00:22:37.602 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.602 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:37.602 "prchk_reftag": false, 00:22:37.602 "prchk_guard": false, 00:22:37.602 "hdgst": false, 00:22:37.602 "ddgst": false, 00:22:37.602 "psk": "/tmp/tmp.lWhTPabYLA", 00:22:37.602 "method": "bdev_nvme_attach_controller", 00:22:37.602 "req_id": 1 00:22:37.602 } 00:22:37.602 Got JSON-RPC error response 00:22:37.602 response: 00:22:37.602 { 00:22:37.602 "code": -1, 00:22:37.602 "message": "Operation not permitted" 00:22:37.602 } 00:22:37.602 15:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1145796 00:22:37.602 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1145796 ']' 00:22:37.602 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1145796 00:22:37.602 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:37.602 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:37.602 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1145796 00:22:37.602 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:37.602 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:37.602 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1145796' 00:22:37.602 killing process with pid 1145796 00:22:37.602 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1145796 00:22:37.602 Received shutdown signal, test time was about 10.000000 seconds 00:22:37.602 00:22:37.602 Latency(us) 00:22:37.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.602 =================================================================================================================== 00:22:37.602 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:37.602 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1145796 00:22:37.602 15:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:37.602 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:37.602 15:34:08 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:37.602 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:37.602 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:37.602 15:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1144215 00:22:37.602 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1144215 ']' 00:22:37.602 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1144215 00:22:37.602 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:37.602 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:37.602 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1144215 00:22:37.602 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:37.859 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:37.859 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1144215' 00:22:37.859 killing process with pid 1144215 00:22:37.859 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1144215 00:22:37.859 [2024-07-13 15:34:08.368180] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:37.859 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1144215 00:22:37.860 15:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:37.860 15:34:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:37.860 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:37.860 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.860 15:34:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1145922 00:22:37.860 15:34:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:37.860 15:34:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1145922 00:22:37.860 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1145922 ']' 00:22:37.860 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.860 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:37.860 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.860 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:37.860 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.117 [2024-07-13 15:34:08.669210] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
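The -1/"Operation not permitted" failure above came from bdev_nvme_load_psk rejecting the now world-readable key; the target starting here will be handed the same 0666 file at target/tls.sh@177, where nvmf_subsystem_add_host fails the analogous check in tcp_load_psk. Judging from the two error paths, the check amounts to refusing any group/other access bits on the PSK file; a rough sketch:

    psk_perms_ok() {
        # sketch of the check implied by "Incorrect permissions for PSK file":
        # 0600 is accepted, anything readable/writable by group or others is not
        local mode
        mode=$(stat -c '%a' "$1") || return 1
        (( (8#$mode & 8#077) == 0 ))
    }
    psk_perms_ok /tmp/tmp.lWhTPabYLA || echo 'PSK file permissions too open'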
00:22:38.117 [2024-07-13 15:34:08.669297] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.117 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.117 [2024-07-13 15:34:08.707995] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:38.117 [2024-07-13 15:34:08.739947] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.117 [2024-07-13 15:34:08.828705] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.117 [2024-07-13 15:34:08.828770] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.117 [2024-07-13 15:34:08.828797] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.117 [2024-07-13 15:34:08.828811] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.117 [2024-07-13 15:34:08.828823] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.117 [2024-07-13 15:34:08.828851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.375 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:38.375 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:38.375 15:34:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:38.375 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:38.375 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.375 15:34:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.375 15:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.lWhTPabYLA 00:22:38.375 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:38.375 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.lWhTPabYLA 00:22:38.375 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:22:38.375 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.375 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:22:38.375 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.375 15:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.lWhTPabYLA 00:22:38.375 15:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lWhTPabYLA 00:22:38.375 15:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:38.632 [2024-07-13 15:34:09.246021] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.632 15:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:38.889 15:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:39.146 [2024-07-13 15:34:09.715264] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:39.146 [2024-07-13 15:34:09.715502] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.146 15:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:39.403 malloc0 00:22:39.403 15:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:39.661 15:34:10 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lWhTPabYLA 00:22:39.919 [2024-07-13 15:34:10.457103] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:39.919 [2024-07-13 15:34:10.457148] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:39.919 [2024-07-13 15:34:10.457188] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:39.919 request: 00:22:39.919 { 00:22:39.919 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.919 "host": "nqn.2016-06.io.spdk:host1", 00:22:39.919 "psk": "/tmp/tmp.lWhTPabYLA", 00:22:39.919 "method": "nvmf_subsystem_add_host", 00:22:39.919 "req_id": 1 00:22:39.919 } 00:22:39.919 Got JSON-RPC error response 00:22:39.919 response: 00:22:39.919 { 00:22:39.919 "code": -32603, 00:22:39.919 "message": "Internal error" 00:22:39.919 } 00:22:39.919 15:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:39.919 15:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:39.919 15:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:39.919 15:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:39.919 15:34:10 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1145922 00:22:39.919 15:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1145922 ']' 00:22:39.919 15:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1145922 00:22:39.919 15:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:39.919 15:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:39.919 15:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1145922 00:22:39.919 15:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:39.919 15:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:39.919 15:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1145922' 00:22:39.919 killing process with pid 1145922 00:22:39.919 15:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1145922 00:22:39.919 15:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1145922 00:22:40.177 15:34:10 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.lWhTPabYLA 00:22:40.177 15:34:10 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:40.177 15:34:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:22:40.177 15:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:40.177 15:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.177 15:34:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1146133 00:22:40.177 15:34:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:40.177 15:34:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1146133 00:22:40.177 15:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1146133 ']' 00:22:40.177 15:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.177 15:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:40.177 15:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.177 15:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:40.177 15:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.177 [2024-07-13 15:34:10.823704] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:40.177 [2024-07-13 15:34:10.823802] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.177 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.177 [2024-07-13 15:34:10.861636] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:40.177 [2024-07-13 15:34:10.894235] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.435 [2024-07-13 15:34:10.991256] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.435 [2024-07-13 15:34:10.991313] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.435 [2024-07-13 15:34:10.991339] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.435 [2024-07-13 15:34:10.991351] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.435 [2024-07-13 15:34:10.991361] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
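With the key back at 0600 (target/tls.sh@181) a fresh target comes up for the final positive pass: the trace below repeats setup_nvmf_tgt, attaches bdevperf successfully, and then snapshots the live target configuration with rpc.py save_config into the tgtconf variable (the JSON beginning at target/tls.sh@196). A hypothetical helper that captures and sanity-checks that output could look like:

    capture_tgt_config() {
        # hypothetical helper: dump the running target's JSON config and verify it parses
        local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py out=$1
        "$rpc" save_config > "$out"
        python3 -m json.tool "$out" > /dev/null && echo "config saved to $out"
    }
    capture_tgt_config /tmp/tgtconf.json    # illustrative path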
00:22:40.435 [2024-07-13 15:34:10.991393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.435 15:34:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:40.435 15:34:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:40.435 15:34:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:40.435 15:34:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:40.435 15:34:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.435 15:34:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.435 15:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.lWhTPabYLA 00:22:40.435 15:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lWhTPabYLA 00:22:40.435 15:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:40.693 [2024-07-13 15:34:11.346068] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.693 15:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:40.950 15:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:41.207 [2024-07-13 15:34:11.815293] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:41.207 [2024-07-13 15:34:11.815523] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.207 15:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:41.465 malloc0 00:22:41.465 15:34:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:41.721 15:34:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lWhTPabYLA 00:22:41.978 [2024-07-13 15:34:12.561311] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:41.978 15:34:12 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1146415 00:22:41.978 15:34:12 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:41.978 15:34:12 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:41.978 15:34:12 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1146415 /var/tmp/bdevperf.sock 00:22:41.978 15:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1146415 ']' 00:22:41.978 15:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.978 15:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:41.978 15:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:41.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.978 15:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:41.978 15:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.978 [2024-07-13 15:34:12.623349] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:41.978 [2024-07-13 15:34:12.623435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146415 ] 00:22:41.978 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.978 [2024-07-13 15:34:12.656728] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:41.978 [2024-07-13 15:34:12.684685] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.237 [2024-07-13 15:34:12.771629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.237 15:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:42.237 15:34:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:42.237 15:34:12 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lWhTPabYLA 00:22:42.494 [2024-07-13 15:34:13.115453] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:42.494 [2024-07-13 15:34:13.115566] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:42.494 TLSTESTn1 00:22:42.494 15:34:13 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:43.059 15:34:13 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:43.059 "subsystems": [ 00:22:43.059 { 00:22:43.059 "subsystem": "keyring", 00:22:43.059 "config": [] 00:22:43.059 }, 00:22:43.059 { 00:22:43.059 "subsystem": "iobuf", 00:22:43.059 "config": [ 00:22:43.059 { 00:22:43.059 "method": "iobuf_set_options", 00:22:43.059 "params": { 00:22:43.059 "small_pool_count": 8192, 00:22:43.059 "large_pool_count": 1024, 00:22:43.059 "small_bufsize": 8192, 00:22:43.059 "large_bufsize": 135168 00:22:43.059 } 00:22:43.059 } 00:22:43.059 ] 00:22:43.059 }, 00:22:43.059 { 00:22:43.059 "subsystem": "sock", 00:22:43.059 "config": [ 00:22:43.059 { 00:22:43.059 "method": "sock_set_default_impl", 00:22:43.059 "params": { 00:22:43.059 "impl_name": "posix" 00:22:43.059 } 00:22:43.059 }, 00:22:43.059 { 00:22:43.059 "method": "sock_impl_set_options", 00:22:43.059 "params": { 00:22:43.059 "impl_name": "ssl", 00:22:43.059 "recv_buf_size": 4096, 00:22:43.059 "send_buf_size": 4096, 00:22:43.059 "enable_recv_pipe": true, 00:22:43.059 "enable_quickack": false, 00:22:43.059 "enable_placement_id": 0, 00:22:43.059 "enable_zerocopy_send_server": true, 00:22:43.059 "enable_zerocopy_send_client": false, 00:22:43.059 "zerocopy_threshold": 0, 00:22:43.059 "tls_version": 0, 00:22:43.059 "enable_ktls": false 00:22:43.059 
} 00:22:43.059 }, 00:22:43.059 { 00:22:43.059 "method": "sock_impl_set_options", 00:22:43.059 "params": { 00:22:43.059 "impl_name": "posix", 00:22:43.059 "recv_buf_size": 2097152, 00:22:43.059 "send_buf_size": 2097152, 00:22:43.059 "enable_recv_pipe": true, 00:22:43.059 "enable_quickack": false, 00:22:43.059 "enable_placement_id": 0, 00:22:43.059 "enable_zerocopy_send_server": true, 00:22:43.059 "enable_zerocopy_send_client": false, 00:22:43.059 "zerocopy_threshold": 0, 00:22:43.059 "tls_version": 0, 00:22:43.059 "enable_ktls": false 00:22:43.059 } 00:22:43.059 } 00:22:43.059 ] 00:22:43.059 }, 00:22:43.059 { 00:22:43.059 "subsystem": "vmd", 00:22:43.059 "config": [] 00:22:43.059 }, 00:22:43.059 { 00:22:43.059 "subsystem": "accel", 00:22:43.059 "config": [ 00:22:43.059 { 00:22:43.059 "method": "accel_set_options", 00:22:43.059 "params": { 00:22:43.059 "small_cache_size": 128, 00:22:43.059 "large_cache_size": 16, 00:22:43.059 "task_count": 2048, 00:22:43.059 "sequence_count": 2048, 00:22:43.059 "buf_count": 2048 00:22:43.059 } 00:22:43.059 } 00:22:43.059 ] 00:22:43.059 }, 00:22:43.059 { 00:22:43.059 "subsystem": "bdev", 00:22:43.059 "config": [ 00:22:43.059 { 00:22:43.059 "method": "bdev_set_options", 00:22:43.059 "params": { 00:22:43.059 "bdev_io_pool_size": 65535, 00:22:43.059 "bdev_io_cache_size": 256, 00:22:43.059 "bdev_auto_examine": true, 00:22:43.059 "iobuf_small_cache_size": 128, 00:22:43.059 "iobuf_large_cache_size": 16 00:22:43.059 } 00:22:43.059 }, 00:22:43.059 { 00:22:43.059 "method": "bdev_raid_set_options", 00:22:43.059 "params": { 00:22:43.059 "process_window_size_kb": 1024 00:22:43.059 } 00:22:43.059 }, 00:22:43.059 { 00:22:43.059 "method": "bdev_iscsi_set_options", 00:22:43.059 "params": { 00:22:43.059 "timeout_sec": 30 00:22:43.059 } 00:22:43.059 }, 00:22:43.059 { 00:22:43.059 "method": "bdev_nvme_set_options", 00:22:43.059 "params": { 00:22:43.059 "action_on_timeout": "none", 00:22:43.059 "timeout_us": 0, 00:22:43.059 "timeout_admin_us": 0, 00:22:43.060 "keep_alive_timeout_ms": 10000, 00:22:43.060 "arbitration_burst": 0, 00:22:43.060 "low_priority_weight": 0, 00:22:43.060 "medium_priority_weight": 0, 00:22:43.060 "high_priority_weight": 0, 00:22:43.060 "nvme_adminq_poll_period_us": 10000, 00:22:43.060 "nvme_ioq_poll_period_us": 0, 00:22:43.060 "io_queue_requests": 0, 00:22:43.060 "delay_cmd_submit": true, 00:22:43.060 "transport_retry_count": 4, 00:22:43.060 "bdev_retry_count": 3, 00:22:43.060 "transport_ack_timeout": 0, 00:22:43.060 "ctrlr_loss_timeout_sec": 0, 00:22:43.060 "reconnect_delay_sec": 0, 00:22:43.060 "fast_io_fail_timeout_sec": 0, 00:22:43.060 "disable_auto_failback": false, 00:22:43.060 "generate_uuids": false, 00:22:43.060 "transport_tos": 0, 00:22:43.060 "nvme_error_stat": false, 00:22:43.060 "rdma_srq_size": 0, 00:22:43.060 "io_path_stat": false, 00:22:43.060 "allow_accel_sequence": false, 00:22:43.060 "rdma_max_cq_size": 0, 00:22:43.060 "rdma_cm_event_timeout_ms": 0, 00:22:43.060 "dhchap_digests": [ 00:22:43.060 "sha256", 00:22:43.060 "sha384", 00:22:43.060 "sha512" 00:22:43.060 ], 00:22:43.060 "dhchap_dhgroups": [ 00:22:43.060 "null", 00:22:43.060 "ffdhe2048", 00:22:43.060 "ffdhe3072", 00:22:43.060 "ffdhe4096", 00:22:43.060 "ffdhe6144", 00:22:43.060 "ffdhe8192" 00:22:43.060 ] 00:22:43.060 } 00:22:43.060 }, 00:22:43.060 { 00:22:43.060 "method": "bdev_nvme_set_hotplug", 00:22:43.060 "params": { 00:22:43.060 "period_us": 100000, 00:22:43.060 "enable": false 00:22:43.060 } 00:22:43.060 }, 00:22:43.060 { 00:22:43.060 "method": "bdev_malloc_create", 
00:22:43.060 "params": { 00:22:43.060 "name": "malloc0", 00:22:43.060 "num_blocks": 8192, 00:22:43.060 "block_size": 4096, 00:22:43.060 "physical_block_size": 4096, 00:22:43.060 "uuid": "f95493ba-c348-40c4-8071-859e82366be6", 00:22:43.060 "optimal_io_boundary": 0 00:22:43.060 } 00:22:43.060 }, 00:22:43.060 { 00:22:43.060 "method": "bdev_wait_for_examine" 00:22:43.060 } 00:22:43.060 ] 00:22:43.060 }, 00:22:43.060 { 00:22:43.060 "subsystem": "nbd", 00:22:43.060 "config": [] 00:22:43.060 }, 00:22:43.060 { 00:22:43.060 "subsystem": "scheduler", 00:22:43.060 "config": [ 00:22:43.060 { 00:22:43.060 "method": "framework_set_scheduler", 00:22:43.060 "params": { 00:22:43.060 "name": "static" 00:22:43.060 } 00:22:43.060 } 00:22:43.060 ] 00:22:43.060 }, 00:22:43.060 { 00:22:43.060 "subsystem": "nvmf", 00:22:43.060 "config": [ 00:22:43.060 { 00:22:43.060 "method": "nvmf_set_config", 00:22:43.060 "params": { 00:22:43.060 "discovery_filter": "match_any", 00:22:43.060 "admin_cmd_passthru": { 00:22:43.060 "identify_ctrlr": false 00:22:43.060 } 00:22:43.060 } 00:22:43.060 }, 00:22:43.060 { 00:22:43.060 "method": "nvmf_set_max_subsystems", 00:22:43.060 "params": { 00:22:43.060 "max_subsystems": 1024 00:22:43.060 } 00:22:43.060 }, 00:22:43.060 { 00:22:43.060 "method": "nvmf_set_crdt", 00:22:43.060 "params": { 00:22:43.060 "crdt1": 0, 00:22:43.060 "crdt2": 0, 00:22:43.060 "crdt3": 0 00:22:43.060 } 00:22:43.060 }, 00:22:43.060 { 00:22:43.060 "method": "nvmf_create_transport", 00:22:43.060 "params": { 00:22:43.060 "trtype": "TCP", 00:22:43.060 "max_queue_depth": 128, 00:22:43.060 "max_io_qpairs_per_ctrlr": 127, 00:22:43.060 "in_capsule_data_size": 4096, 00:22:43.060 "max_io_size": 131072, 00:22:43.060 "io_unit_size": 131072, 00:22:43.060 "max_aq_depth": 128, 00:22:43.060 "num_shared_buffers": 511, 00:22:43.060 "buf_cache_size": 4294967295, 00:22:43.060 "dif_insert_or_strip": false, 00:22:43.060 "zcopy": false, 00:22:43.060 "c2h_success": false, 00:22:43.060 "sock_priority": 0, 00:22:43.060 "abort_timeout_sec": 1, 00:22:43.060 "ack_timeout": 0, 00:22:43.060 "data_wr_pool_size": 0 00:22:43.060 } 00:22:43.060 }, 00:22:43.060 { 00:22:43.060 "method": "nvmf_create_subsystem", 00:22:43.060 "params": { 00:22:43.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.060 "allow_any_host": false, 00:22:43.060 "serial_number": "SPDK00000000000001", 00:22:43.060 "model_number": "SPDK bdev Controller", 00:22:43.060 "max_namespaces": 10, 00:22:43.060 "min_cntlid": 1, 00:22:43.060 "max_cntlid": 65519, 00:22:43.060 "ana_reporting": false 00:22:43.060 } 00:22:43.060 }, 00:22:43.060 { 00:22:43.060 "method": "nvmf_subsystem_add_host", 00:22:43.060 "params": { 00:22:43.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.060 "host": "nqn.2016-06.io.spdk:host1", 00:22:43.060 "psk": "/tmp/tmp.lWhTPabYLA" 00:22:43.060 } 00:22:43.060 }, 00:22:43.060 { 00:22:43.060 "method": "nvmf_subsystem_add_ns", 00:22:43.060 "params": { 00:22:43.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.060 "namespace": { 00:22:43.060 "nsid": 1, 00:22:43.060 "bdev_name": "malloc0", 00:22:43.060 "nguid": "F95493BAC34840C48071859E82366BE6", 00:22:43.060 "uuid": "f95493ba-c348-40c4-8071-859e82366be6", 00:22:43.060 "no_auto_visible": false 00:22:43.060 } 00:22:43.060 } 00:22:43.060 }, 00:22:43.060 { 00:22:43.060 "method": "nvmf_subsystem_add_listener", 00:22:43.060 "params": { 00:22:43.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.060 "listen_address": { 00:22:43.060 "trtype": "TCP", 00:22:43.060 "adrfam": "IPv4", 00:22:43.060 "traddr": "10.0.0.2", 00:22:43.060 
"trsvcid": "4420" 00:22:43.060 }, 00:22:43.060 "secure_channel": true 00:22:43.060 } 00:22:43.060 } 00:22:43.060 ] 00:22:43.060 } 00:22:43.060 ] 00:22:43.060 }' 00:22:43.060 15:34:13 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:43.060 15:34:13 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:43.060 "subsystems": [ 00:22:43.060 { 00:22:43.060 "subsystem": "keyring", 00:22:43.060 "config": [] 00:22:43.060 }, 00:22:43.060 { 00:22:43.060 "subsystem": "iobuf", 00:22:43.060 "config": [ 00:22:43.060 { 00:22:43.060 "method": "iobuf_set_options", 00:22:43.060 "params": { 00:22:43.060 "small_pool_count": 8192, 00:22:43.060 "large_pool_count": 1024, 00:22:43.060 "small_bufsize": 8192, 00:22:43.060 "large_bufsize": 135168 00:22:43.060 } 00:22:43.060 } 00:22:43.060 ] 00:22:43.060 }, 00:22:43.060 { 00:22:43.060 "subsystem": "sock", 00:22:43.060 "config": [ 00:22:43.060 { 00:22:43.060 "method": "sock_set_default_impl", 00:22:43.060 "params": { 00:22:43.060 "impl_name": "posix" 00:22:43.060 } 00:22:43.060 }, 00:22:43.060 { 00:22:43.060 "method": "sock_impl_set_options", 00:22:43.060 "params": { 00:22:43.060 "impl_name": "ssl", 00:22:43.060 "recv_buf_size": 4096, 00:22:43.060 "send_buf_size": 4096, 00:22:43.060 "enable_recv_pipe": true, 00:22:43.060 "enable_quickack": false, 00:22:43.060 "enable_placement_id": 0, 00:22:43.060 "enable_zerocopy_send_server": true, 00:22:43.060 "enable_zerocopy_send_client": false, 00:22:43.060 "zerocopy_threshold": 0, 00:22:43.060 "tls_version": 0, 00:22:43.060 "enable_ktls": false 00:22:43.060 } 00:22:43.060 }, 00:22:43.060 { 00:22:43.060 "method": "sock_impl_set_options", 00:22:43.060 "params": { 00:22:43.060 "impl_name": "posix", 00:22:43.060 "recv_buf_size": 2097152, 00:22:43.060 "send_buf_size": 2097152, 00:22:43.060 "enable_recv_pipe": true, 00:22:43.060 "enable_quickack": false, 00:22:43.060 "enable_placement_id": 0, 00:22:43.060 "enable_zerocopy_send_server": true, 00:22:43.060 "enable_zerocopy_send_client": false, 00:22:43.060 "zerocopy_threshold": 0, 00:22:43.060 "tls_version": 0, 00:22:43.060 "enable_ktls": false 00:22:43.060 } 00:22:43.060 } 00:22:43.060 ] 00:22:43.060 }, 00:22:43.060 { 00:22:43.060 "subsystem": "vmd", 00:22:43.060 "config": [] 00:22:43.060 }, 00:22:43.060 { 00:22:43.060 "subsystem": "accel", 00:22:43.060 "config": [ 00:22:43.060 { 00:22:43.060 "method": "accel_set_options", 00:22:43.060 "params": { 00:22:43.060 "small_cache_size": 128, 00:22:43.060 "large_cache_size": 16, 00:22:43.060 "task_count": 2048, 00:22:43.060 "sequence_count": 2048, 00:22:43.061 "buf_count": 2048 00:22:43.061 } 00:22:43.061 } 00:22:43.061 ] 00:22:43.061 }, 00:22:43.061 { 00:22:43.061 "subsystem": "bdev", 00:22:43.061 "config": [ 00:22:43.061 { 00:22:43.061 "method": "bdev_set_options", 00:22:43.061 "params": { 00:22:43.061 "bdev_io_pool_size": 65535, 00:22:43.061 "bdev_io_cache_size": 256, 00:22:43.061 "bdev_auto_examine": true, 00:22:43.061 "iobuf_small_cache_size": 128, 00:22:43.061 "iobuf_large_cache_size": 16 00:22:43.061 } 00:22:43.061 }, 00:22:43.061 { 00:22:43.061 "method": "bdev_raid_set_options", 00:22:43.061 "params": { 00:22:43.061 "process_window_size_kb": 1024 00:22:43.061 } 00:22:43.061 }, 00:22:43.061 { 00:22:43.061 "method": "bdev_iscsi_set_options", 00:22:43.061 "params": { 00:22:43.061 "timeout_sec": 30 00:22:43.061 } 00:22:43.061 }, 00:22:43.061 { 00:22:43.061 "method": "bdev_nvme_set_options", 00:22:43.061 "params": { 
00:22:43.061 "action_on_timeout": "none", 00:22:43.061 "timeout_us": 0, 00:22:43.061 "timeout_admin_us": 0, 00:22:43.061 "keep_alive_timeout_ms": 10000, 00:22:43.061 "arbitration_burst": 0, 00:22:43.061 "low_priority_weight": 0, 00:22:43.061 "medium_priority_weight": 0, 00:22:43.061 "high_priority_weight": 0, 00:22:43.061 "nvme_adminq_poll_period_us": 10000, 00:22:43.061 "nvme_ioq_poll_period_us": 0, 00:22:43.061 "io_queue_requests": 512, 00:22:43.061 "delay_cmd_submit": true, 00:22:43.061 "transport_retry_count": 4, 00:22:43.061 "bdev_retry_count": 3, 00:22:43.061 "transport_ack_timeout": 0, 00:22:43.061 "ctrlr_loss_timeout_sec": 0, 00:22:43.061 "reconnect_delay_sec": 0, 00:22:43.061 "fast_io_fail_timeout_sec": 0, 00:22:43.061 "disable_auto_failback": false, 00:22:43.061 "generate_uuids": false, 00:22:43.061 "transport_tos": 0, 00:22:43.061 "nvme_error_stat": false, 00:22:43.061 "rdma_srq_size": 0, 00:22:43.061 "io_path_stat": false, 00:22:43.061 "allow_accel_sequence": false, 00:22:43.061 "rdma_max_cq_size": 0, 00:22:43.061 "rdma_cm_event_timeout_ms": 0, 00:22:43.061 "dhchap_digests": [ 00:22:43.061 "sha256", 00:22:43.061 "sha384", 00:22:43.061 "sha512" 00:22:43.061 ], 00:22:43.061 "dhchap_dhgroups": [ 00:22:43.061 "null", 00:22:43.061 "ffdhe2048", 00:22:43.061 "ffdhe3072", 00:22:43.061 "ffdhe4096", 00:22:43.061 "ffdhe6144", 00:22:43.061 "ffdhe8192" 00:22:43.061 ] 00:22:43.061 } 00:22:43.061 }, 00:22:43.061 { 00:22:43.061 "method": "bdev_nvme_attach_controller", 00:22:43.061 "params": { 00:22:43.061 "name": "TLSTEST", 00:22:43.061 "trtype": "TCP", 00:22:43.061 "adrfam": "IPv4", 00:22:43.061 "traddr": "10.0.0.2", 00:22:43.061 "trsvcid": "4420", 00:22:43.061 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.061 "prchk_reftag": false, 00:22:43.061 "prchk_guard": false, 00:22:43.061 "ctrlr_loss_timeout_sec": 0, 00:22:43.061 "reconnect_delay_sec": 0, 00:22:43.061 "fast_io_fail_timeout_sec": 0, 00:22:43.061 "psk": "/tmp/tmp.lWhTPabYLA", 00:22:43.061 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.061 "hdgst": false, 00:22:43.061 "ddgst": false 00:22:43.061 } 00:22:43.061 }, 00:22:43.061 { 00:22:43.061 "method": "bdev_nvme_set_hotplug", 00:22:43.061 "params": { 00:22:43.061 "period_us": 100000, 00:22:43.061 "enable": false 00:22:43.061 } 00:22:43.061 }, 00:22:43.061 { 00:22:43.061 "method": "bdev_wait_for_examine" 00:22:43.061 } 00:22:43.061 ] 00:22:43.061 }, 00:22:43.061 { 00:22:43.061 "subsystem": "nbd", 00:22:43.061 "config": [] 00:22:43.061 } 00:22:43.061 ] 00:22:43.061 }' 00:22:43.061 15:34:13 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1146415 00:22:43.061 15:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1146415 ']' 00:22:43.061 15:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1146415 00:22:43.061 15:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:43.061 15:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:43.320 15:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1146415 00:22:43.320 15:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:43.320 15:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:43.320 15:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1146415' 00:22:43.320 killing process with pid 1146415 00:22:43.320 15:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1146415 
00:22:43.320 Received shutdown signal, test time was about 10.000000 seconds 00:22:43.320 00:22:43.320 Latency(us) 00:22:43.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.320 =================================================================================================================== 00:22:43.320 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:43.321 [2024-07-13 15:34:13.850973] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:43.321 15:34:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1146415 00:22:43.321 15:34:14 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1146133 00:22:43.321 15:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1146133 ']' 00:22:43.321 15:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1146133 00:22:43.321 15:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:43.321 15:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:43.321 15:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1146133 00:22:43.601 15:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:43.601 15:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:43.601 15:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1146133' 00:22:43.601 killing process with pid 1146133 00:22:43.602 15:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1146133 00:22:43.602 [2024-07-13 15:34:14.104430] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:43.602 15:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1146133 00:22:43.602 15:34:14 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:43.602 15:34:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:43.602 15:34:14 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:43.602 "subsystems": [ 00:22:43.602 { 00:22:43.602 "subsystem": "keyring", 00:22:43.602 "config": [] 00:22:43.602 }, 00:22:43.602 { 00:22:43.602 "subsystem": "iobuf", 00:22:43.602 "config": [ 00:22:43.602 { 00:22:43.602 "method": "iobuf_set_options", 00:22:43.602 "params": { 00:22:43.602 "small_pool_count": 8192, 00:22:43.602 "large_pool_count": 1024, 00:22:43.602 "small_bufsize": 8192, 00:22:43.602 "large_bufsize": 135168 00:22:43.602 } 00:22:43.602 } 00:22:43.602 ] 00:22:43.602 }, 00:22:43.602 { 00:22:43.602 "subsystem": "sock", 00:22:43.602 "config": [ 00:22:43.602 { 00:22:43.602 "method": "sock_set_default_impl", 00:22:43.602 "params": { 00:22:43.602 "impl_name": "posix" 00:22:43.602 } 00:22:43.602 }, 00:22:43.602 { 00:22:43.602 "method": "sock_impl_set_options", 00:22:43.602 "params": { 00:22:43.602 "impl_name": "ssl", 00:22:43.602 "recv_buf_size": 4096, 00:22:43.602 "send_buf_size": 4096, 00:22:43.602 "enable_recv_pipe": true, 00:22:43.602 "enable_quickack": false, 00:22:43.602 "enable_placement_id": 0, 00:22:43.602 "enable_zerocopy_send_server": true, 00:22:43.602 "enable_zerocopy_send_client": false, 00:22:43.602 "zerocopy_threshold": 0, 00:22:43.602 "tls_version": 0, 00:22:43.602 "enable_ktls": false 00:22:43.602 } 00:22:43.602 }, 00:22:43.602 { 00:22:43.602 "method": 
"sock_impl_set_options", 00:22:43.602 "params": { 00:22:43.602 "impl_name": "posix", 00:22:43.602 "recv_buf_size": 2097152, 00:22:43.602 "send_buf_size": 2097152, 00:22:43.602 "enable_recv_pipe": true, 00:22:43.602 "enable_quickack": false, 00:22:43.602 "enable_placement_id": 0, 00:22:43.602 "enable_zerocopy_send_server": true, 00:22:43.602 "enable_zerocopy_send_client": false, 00:22:43.602 "zerocopy_threshold": 0, 00:22:43.602 "tls_version": 0, 00:22:43.602 "enable_ktls": false 00:22:43.602 } 00:22:43.602 } 00:22:43.602 ] 00:22:43.602 }, 00:22:43.602 { 00:22:43.602 "subsystem": "vmd", 00:22:43.602 "config": [] 00:22:43.602 }, 00:22:43.602 { 00:22:43.602 "subsystem": "accel", 00:22:43.602 "config": [ 00:22:43.602 { 00:22:43.602 "method": "accel_set_options", 00:22:43.602 "params": { 00:22:43.602 "small_cache_size": 128, 00:22:43.602 "large_cache_size": 16, 00:22:43.602 "task_count": 2048, 00:22:43.602 "sequence_count": 2048, 00:22:43.602 "buf_count": 2048 00:22:43.602 } 00:22:43.602 } 00:22:43.602 ] 00:22:43.602 }, 00:22:43.602 { 00:22:43.602 "subsystem": "bdev", 00:22:43.602 "config": [ 00:22:43.602 { 00:22:43.602 "method": "bdev_set_options", 00:22:43.602 "params": { 00:22:43.602 "bdev_io_pool_size": 65535, 00:22:43.602 "bdev_io_cache_size": 256, 00:22:43.602 "bdev_auto_examine": true, 00:22:43.602 "iobuf_small_cache_size": 128, 00:22:43.602 "iobuf_large_cache_size": 16 00:22:43.602 } 00:22:43.602 }, 00:22:43.602 { 00:22:43.602 "method": "bdev_raid_set_options", 00:22:43.602 "params": { 00:22:43.602 "process_window_size_kb": 1024 00:22:43.602 } 00:22:43.602 }, 00:22:43.602 { 00:22:43.602 "method": "bdev_iscsi_set_options", 00:22:43.602 "params": { 00:22:43.602 "timeout_sec": 30 00:22:43.602 } 00:22:43.602 }, 00:22:43.602 { 00:22:43.602 "method": "bdev_nvme_set_options", 00:22:43.602 "params": { 00:22:43.602 "action_on_timeout": "none", 00:22:43.602 "timeout_us": 0, 00:22:43.602 "timeout_admin_us": 0, 00:22:43.602 "keep_alive_timeout_ms": 10000, 00:22:43.602 "arbitration_burst": 0, 00:22:43.602 "low_priority_weight": 0, 00:22:43.602 "medium_priority_weight": 0, 00:22:43.602 "high_priority_weight": 0, 00:22:43.602 "nvme_adminq_poll_period_us": 10000, 00:22:43.602 "nvme_ioq_poll_period_us": 0, 00:22:43.602 "io_queue_requests": 0, 00:22:43.602 "delay_cmd_submit": true, 00:22:43.602 "transport_retry_count": 4, 00:22:43.602 "bdev_retry_count": 3, 00:22:43.602 "transport_ack_timeout": 0, 00:22:43.602 "ctrlr_loss_timeout_sec": 0, 00:22:43.602 "reconnect_delay_sec": 0, 00:22:43.602 "fast_io_fail_timeout_sec": 0, 00:22:43.602 "disable_auto_failback": false, 00:22:43.602 "generate_uuids": false, 00:22:43.602 "transport_tos": 0, 00:22:43.602 "nvme_error_stat": false, 00:22:43.602 "rdma_srq_size": 0, 00:22:43.602 "io_path_stat": false, 00:22:43.602 "allow_accel_sequence": false, 00:22:43.602 "rdma_max_cq_size": 0, 00:22:43.602 "rdma_cm_event_timeout_ms": 0, 00:22:43.602 "dhchap_digests": [ 00:22:43.602 "sha256", 00:22:43.602 "sha384", 00:22:43.602 "sha512" 00:22:43.602 ], 00:22:43.602 "dhchap_dhgroups": [ 00:22:43.602 "null", 00:22:43.602 "ffdhe2048", 00:22:43.602 "ffdhe3072", 00:22:43.602 "ffdhe4096", 00:22:43.602 "ffdhe6144", 00:22:43.602 "ffdhe8192" 00:22:43.602 ] 00:22:43.602 } 00:22:43.602 }, 00:22:43.602 { 00:22:43.602 "method": "bdev_nvme_set_hotplug", 00:22:43.602 "params": { 00:22:43.602 "period_us": 100000, 00:22:43.602 "enable": false 00:22:43.602 } 00:22:43.602 }, 00:22:43.602 { 00:22:43.602 "method": "bdev_malloc_create", 00:22:43.602 "params": { 00:22:43.602 "name": "malloc0", 
00:22:43.602 "num_blocks": 8192, 00:22:43.602 "block_size": 4096, 00:22:43.602 "physical_block_size": 4096, 00:22:43.602 "uuid": "f95493ba-c348-40c4-8071-859e82366be6", 00:22:43.602 "optimal_io_boundary": 0 00:22:43.602 } 00:22:43.602 }, 00:22:43.602 { 00:22:43.602 "method": "bdev_wait_for_examine" 00:22:43.602 } 00:22:43.602 ] 00:22:43.602 }, 00:22:43.602 { 00:22:43.602 "subsystem": "nbd", 00:22:43.602 "config": [] 00:22:43.602 }, 00:22:43.602 { 00:22:43.602 "subsystem": "scheduler", 00:22:43.602 "config": [ 00:22:43.602 { 00:22:43.602 "method": "framework_set_scheduler", 00:22:43.602 "params": { 00:22:43.602 "name": "static" 00:22:43.602 } 00:22:43.602 } 00:22:43.602 ] 00:22:43.602 }, 00:22:43.602 { 00:22:43.602 "subsystem": "nvmf", 00:22:43.602 "config": [ 00:22:43.602 { 00:22:43.602 "method": "nvmf_set_config", 00:22:43.602 "params": { 00:22:43.602 "discovery_filter": "match_any", 00:22:43.602 "admin_cmd_passthru": { 00:22:43.602 "identify_ctrlr": false 00:22:43.602 } 00:22:43.602 } 00:22:43.602 }, 00:22:43.602 { 00:22:43.602 "method": "nvmf_set_max_subsystems", 00:22:43.602 "params": { 00:22:43.602 "max_subsystems": 1024 00:22:43.602 } 00:22:43.602 }, 00:22:43.602 { 00:22:43.602 "method": "nvmf_set_crdt", 00:22:43.602 "params": { 00:22:43.602 "crdt1": 0, 00:22:43.602 "crdt2": 0, 00:22:43.602 "crdt3": 0 00:22:43.602 } 00:22:43.602 }, 00:22:43.602 { 00:22:43.602 "method": "nvmf_create_transport", 00:22:43.602 "params": { 00:22:43.602 "trtype": "TCP", 00:22:43.602 "max_queue_depth": 128, 00:22:43.602 "max_io_qpairs_per_ctrlr": 127, 00:22:43.602 "in_capsule_data_size": 4096, 00:22:43.602 "max_io_size": 131072, 00:22:43.602 "io_unit_size": 131072, 00:22:43.602 "max_aq_depth": 128, 00:22:43.602 "num_shared_buffers": 511, 00:22:43.602 "buf_cache_size": 4294967295, 00:22:43.602 "dif_insert_or_strip": false, 00:22:43.602 "zcopy": false, 00:22:43.602 "c2h_success": false, 00:22:43.602 "sock_priority": 0, 00:22:43.602 "abort_timeout_sec": 1, 00:22:43.602 "ack_timeout": 0, 00:22:43.602 "data_wr_pool_size": 0 00:22:43.602 } 00:22:43.602 }, 00:22:43.602 { 00:22:43.602 "method": "nvmf_create_subsystem", 00:22:43.602 "params": { 00:22:43.602 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.602 "allow_any_host": false, 00:22:43.603 "serial_number": "SPDK00000000000001", 00:22:43.603 "model_number": "SPDK bdev Controller", 00:22:43.603 "max_namespaces": 10, 00:22:43.603 "min_cntlid": 1, 00:22:43.603 "max_cntlid": 65519, 00:22:43.603 "ana_reporting": false 00:22:43.603 } 00:22:43.603 }, 00:22:43.603 { 00:22:43.603 "method": "nvmf_subsystem_add_host", 00:22:43.603 "params": { 00:22:43.603 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.603 "host": "nqn.2016-06.io.spdk:host1", 00:22:43.603 "psk": "/tmp/tmp.lWhTPabYLA" 00:22:43.603 } 00:22:43.603 }, 00:22:43.603 { 00:22:43.603 "method": "nvmf_subsystem_add_ns", 00:22:43.603 "params": { 00:22:43.603 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.603 "namespace": { 00:22:43.603 "nsid": 1, 00:22:43.603 "bdev_name": "malloc0", 00:22:43.603 "nguid": "F95493BAC34840C48071859E82366BE6", 00:22:43.603 "uuid": "f95493ba-c348-40c4-8071-859e82366be6", 00:22:43.603 "no_auto_visible": false 00:22:43.603 } 00:22:43.603 } 00:22:43.603 }, 00:22:43.603 { 00:22:43.603 "method": "nvmf_subsystem_add_listener", 00:22:43.603 "params": { 00:22:43.603 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.603 "listen_address": { 00:22:43.603 "trtype": "TCP", 00:22:43.603 "adrfam": "IPv4", 00:22:43.603 "traddr": "10.0.0.2", 00:22:43.603 "trsvcid": "4420" 00:22:43.603 }, 00:22:43.603 
"secure_channel": true 00:22:43.603 } 00:22:43.603 } 00:22:43.603 ] 00:22:43.603 } 00:22:43.603 ] 00:22:43.603 }' 00:22:43.603 15:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:43.603 15:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.603 15:34:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1146691 00:22:43.603 15:34:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:43.603 15:34:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1146691 00:22:43.603 15:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1146691 ']' 00:22:43.603 15:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.603 15:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:43.603 15:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.603 15:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:43.603 15:34:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.867 [2024-07-13 15:34:14.395394] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:43.867 [2024-07-13 15:34:14.395469] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.867 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.867 [2024-07-13 15:34:14.433837] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:43.867 [2024-07-13 15:34:14.460741] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.867 [2024-07-13 15:34:14.550621] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.867 [2024-07-13 15:34:14.550681] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.867 [2024-07-13 15:34:14.550696] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.867 [2024-07-13 15:34:14.550708] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.867 [2024-07-13 15:34:14.550718] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:43.867 [2024-07-13 15:34:14.550805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.126 [2024-07-13 15:34:14.784432] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.126 [2024-07-13 15:34:14.800397] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:44.126 [2024-07-13 15:34:14.816451] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:44.126 [2024-07-13 15:34:14.825035] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.692 15:34:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:44.692 15:34:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:44.692 15:34:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:44.692 15:34:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:44.692 15:34:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.692 15:34:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.692 15:34:15 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1146842 00:22:44.692 15:34:15 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1146842 /var/tmp/bdevperf.sock 00:22:44.692 15:34:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1146842 ']' 00:22:44.692 15:34:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:44.692 15:34:15 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:44.692 15:34:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:44.692 15:34:15 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:44.692 "subsystems": [ 00:22:44.692 { 00:22:44.692 "subsystem": "keyring", 00:22:44.692 "config": [] 00:22:44.692 }, 00:22:44.692 { 00:22:44.692 "subsystem": "iobuf", 00:22:44.692 "config": [ 00:22:44.692 { 00:22:44.692 "method": "iobuf_set_options", 00:22:44.692 "params": { 00:22:44.692 "small_pool_count": 8192, 00:22:44.692 "large_pool_count": 1024, 00:22:44.692 "small_bufsize": 8192, 00:22:44.692 "large_bufsize": 135168 00:22:44.692 } 00:22:44.692 } 00:22:44.692 ] 00:22:44.692 }, 00:22:44.692 { 00:22:44.692 "subsystem": "sock", 00:22:44.692 "config": [ 00:22:44.692 { 00:22:44.692 "method": "sock_set_default_impl", 00:22:44.692 "params": { 00:22:44.692 "impl_name": "posix" 00:22:44.692 } 00:22:44.692 }, 00:22:44.692 { 00:22:44.692 "method": "sock_impl_set_options", 00:22:44.692 "params": { 00:22:44.692 "impl_name": "ssl", 00:22:44.692 "recv_buf_size": 4096, 00:22:44.692 "send_buf_size": 4096, 00:22:44.692 "enable_recv_pipe": true, 00:22:44.692 "enable_quickack": false, 00:22:44.692 "enable_placement_id": 0, 00:22:44.692 "enable_zerocopy_send_server": true, 00:22:44.692 "enable_zerocopy_send_client": false, 00:22:44.692 "zerocopy_threshold": 0, 00:22:44.692 "tls_version": 0, 00:22:44.692 "enable_ktls": false 00:22:44.692 } 00:22:44.692 }, 00:22:44.692 { 00:22:44.692 "method": "sock_impl_set_options", 00:22:44.692 "params": { 00:22:44.692 "impl_name": "posix", 00:22:44.692 "recv_buf_size": 2097152, 00:22:44.692 "send_buf_size": 2097152, 00:22:44.692 "enable_recv_pipe": true, 00:22:44.692 
"enable_quickack": false, 00:22:44.692 "enable_placement_id": 0, 00:22:44.692 "enable_zerocopy_send_server": true, 00:22:44.692 "enable_zerocopy_send_client": false, 00:22:44.692 "zerocopy_threshold": 0, 00:22:44.692 "tls_version": 0, 00:22:44.692 "enable_ktls": false 00:22:44.692 } 00:22:44.692 } 00:22:44.692 ] 00:22:44.692 }, 00:22:44.692 { 00:22:44.692 "subsystem": "vmd", 00:22:44.692 "config": [] 00:22:44.692 }, 00:22:44.692 { 00:22:44.692 "subsystem": "accel", 00:22:44.692 "config": [ 00:22:44.692 { 00:22:44.692 "method": "accel_set_options", 00:22:44.692 "params": { 00:22:44.692 "small_cache_size": 128, 00:22:44.692 "large_cache_size": 16, 00:22:44.692 "task_count": 2048, 00:22:44.692 "sequence_count": 2048, 00:22:44.692 "buf_count": 2048 00:22:44.692 } 00:22:44.692 } 00:22:44.692 ] 00:22:44.692 }, 00:22:44.692 { 00:22:44.692 "subsystem": "bdev", 00:22:44.692 "config": [ 00:22:44.692 { 00:22:44.692 "method": "bdev_set_options", 00:22:44.692 "params": { 00:22:44.692 "bdev_io_pool_size": 65535, 00:22:44.692 "bdev_io_cache_size": 256, 00:22:44.692 "bdev_auto_examine": true, 00:22:44.692 "iobuf_small_cache_size": 128, 00:22:44.692 "iobuf_large_cache_size": 16 00:22:44.692 } 00:22:44.692 }, 00:22:44.692 { 00:22:44.692 "method": "bdev_raid_set_options", 00:22:44.692 "params": { 00:22:44.692 "process_window_size_kb": 1024 00:22:44.692 } 00:22:44.692 }, 00:22:44.692 { 00:22:44.692 "method": "bdev_iscsi_set_options", 00:22:44.692 "params": { 00:22:44.692 "timeout_sec": 30 00:22:44.692 } 00:22:44.692 }, 00:22:44.692 { 00:22:44.692 "method": "bdev_nvme_set_options", 00:22:44.692 "params": { 00:22:44.692 "action_on_timeout": "none", 00:22:44.692 "timeout_us": 0, 00:22:44.692 "timeout_admin_us": 0, 00:22:44.692 "keep_alive_timeout_ms": 10000, 00:22:44.692 "arbitration_burst": 0, 00:22:44.692 "low_priority_weight": 0, 00:22:44.692 "medium_priority_weight": 0, 00:22:44.692 "high_priority_weight": 0, 00:22:44.692 "nvme_adminq_poll_period_us": 10000, 00:22:44.692 "nvme_ioq_poll_period_us": 0, 00:22:44.692 "io_queue_requests": 512, 00:22:44.692 "delay_cmd_submit": true, 00:22:44.692 "transport_retry_count": 4, 00:22:44.692 "bdev_retry_count": 3, 00:22:44.692 "transport_ack_timeout": 0, 00:22:44.692 "ctrlr_loss_timeout_sec": 0, 00:22:44.692 "reconnect_delay_sec": 0, 00:22:44.692 "fast_io_fail_timeout_sec": 0, 00:22:44.692 "disable_auto_failback": false, 00:22:44.692 "generate_uuids": false, 00:22:44.692 "transport_tos": 0, 00:22:44.692 "nvme_error_stat": false, 00:22:44.692 "rdma_srq_size": 0, 00:22:44.692 "io_path_stat": false, 00:22:44.692 "allow_accel_sequence": false, 00:22:44.692 "rdma_max_cq_size": 0, 00:22:44.692 "rdma_cm_event_timeout_ms": 0, 00:22:44.692 "dhchap_digests": [ 00:22:44.692 "sha256", 00:22:44.692 "sha384", 00:22:44.692 "sha512" 00:22:44.692 ], 00:22:44.692 "dhchap_dhgroups": [ 00:22:44.692 "null", 00:22:44.692 "ffdhe2048", 00:22:44.692 "ffdhe3072", 00:22:44.692 "ffdhe4096", 00:22:44.692 "ffdhe6144", 00:22:44.692 "ffdhe8192" 00:22:44.692 ] 00:22:44.692 } 00:22:44.692 }, 00:22:44.692 { 00:22:44.692 "method": "bdev_nvme_attach_controller", 00:22:44.692 "params": { 00:22:44.692 "name": "TLSTEST", 00:22:44.692 "trtype": "TCP", 00:22:44.692 "adrfam": "IPv4", 00:22:44.692 "traddr": "10.0.0.2", 00:22:44.692 "trsvcid": "4420", 00:22:44.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.692 "prchk_reftag": false, 00:22:44.692 "prchk_guard": false, 00:22:44.692 "ctrlr_loss_timeout_sec": 0, 00:22:44.692 "reconnect_delay_sec": 0, 00:22:44.692 "fast_io_fail_timeout_sec": 0, 00:22:44.692 
"psk": "/tmp/tmp.lWhTPabYLA", 00:22:44.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:44.692 "hdgst": false, 00:22:44.692 "ddgst": false 00:22:44.692 } 00:22:44.692 }, 00:22:44.692 { 00:22:44.692 "method": "bdev_nvme_set_hotplug", 00:22:44.692 "params": { 00:22:44.692 "period_us": 100000, 00:22:44.692 "enable": false 00:22:44.692 } 00:22:44.692 }, 00:22:44.692 { 00:22:44.692 "method": "bdev_wait_for_examine" 00:22:44.692 } 00:22:44.693 ] 00:22:44.693 }, 00:22:44.693 { 00:22:44.693 "subsystem": "nbd", 00:22:44.693 "config": [] 00:22:44.693 } 00:22:44.693 ] 00:22:44.693 }' 00:22:44.693 15:34:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:44.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:44.693 15:34:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:44.693 15:34:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.952 [2024-07-13 15:34:15.460676] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:44.952 [2024-07-13 15:34:15.460776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146842 ] 00:22:44.952 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.952 [2024-07-13 15:34:15.493484] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:44.952 [2024-07-13 15:34:15.521924] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.952 [2024-07-13 15:34:15.605844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.211 [2024-07-13 15:34:15.774611] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:45.211 [2024-07-13 15:34:15.774752] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:45.776 15:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:45.776 15:34:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:45.776 15:34:16 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:45.776 Running I/O for 10 seconds... 
00:22:57.989 00:22:57.989 Latency(us) 00:22:57.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.989 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:57.989 Verification LBA range: start 0x0 length 0x2000 00:22:57.989 TLSTESTn1 : 10.05 2506.95 9.79 0.00 0.00 50918.59 6213.78 90099.86 00:22:57.989 =================================================================================================================== 00:22:57.989 Total : 2506.95 9.79 0.00 0.00 50918.59 6213.78 90099.86 00:22:57.989 0 00:22:57.989 15:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:57.989 15:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1146842 00:22:57.989 15:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1146842 ']' 00:22:57.989 15:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1146842 00:22:57.989 15:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:57.989 15:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:57.989 15:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1146842 00:22:57.989 15:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:57.989 15:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:57.989 15:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1146842' 00:22:57.989 killing process with pid 1146842 00:22:57.989 15:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1146842 00:22:57.989 Received shutdown signal, test time was about 10.000000 seconds 00:22:57.989 00:22:57.989 Latency(us) 00:22:57.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.989 =================================================================================================================== 00:22:57.989 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:57.989 [2024-07-13 15:34:26.653323] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:57.989 15:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1146842 00:22:57.989 15:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1146691 00:22:57.989 15:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1146691 ']' 00:22:57.989 15:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1146691 00:22:57.989 15:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:57.989 15:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:57.989 15:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1146691 00:22:57.989 15:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:57.989 15:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:57.989 15:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1146691' 00:22:57.989 killing process with pid 1146691 00:22:57.989 15:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1146691 00:22:57.989 [2024-07-13 15:34:26.896109] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in 
v24.09 hit 1 times 00:22:57.989 15:34:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1146691 00:22:57.989 15:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:57.989 15:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:57.989 15:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:57.989 15:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.989 15:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1148172 00:22:57.989 15:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:57.989 15:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1148172 00:22:57.989 15:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1148172 ']' 00:22:57.989 15:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.989 15:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:57.989 15:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.989 15:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:57.989 15:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.989 [2024-07-13 15:34:27.195722] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:57.989 [2024-07-13 15:34:27.195816] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.989 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.989 [2024-07-13 15:34:27.232703] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:57.989 [2024-07-13 15:34:27.260448] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.989 [2024-07-13 15:34:27.343807] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.989 [2024-07-13 15:34:27.343879] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.989 [2024-07-13 15:34:27.343904] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.989 [2024-07-13 15:34:27.343930] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.989 [2024-07-13 15:34:27.343940] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
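A fresh target (pid 1148172) is started here without a config file, and the setup_nvmf_tgt helper traced just below rebuilds the same TLS listener over RPC. Condensed, the sequence it issues looks like the following, with rpc.py shorthand for the full scripts/rpc.py path and the PSK path being the temporary key file created earlier in the test:

# Sketch of the setup_nvmf_tgt RPC sequence seen in the trace: TCP transport,
# a subsystem with one malloc namespace, and a TLS-enabled listener (-k) whose
# allowed host is bound to the PSK file.
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lWhTPabYLA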
00:22:57.989 [2024-07-13 15:34:27.343966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.989 15:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:57.989 15:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:57.989 15:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:57.989 15:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:57.989 15:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.989 15:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.989 15:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.lWhTPabYLA 00:22:57.989 15:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lWhTPabYLA 00:22:57.989 15:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:57.989 [2024-07-13 15:34:27.750251] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.990 15:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:57.990 15:34:28 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:57.990 [2024-07-13 15:34:28.319775] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:57.990 [2024-07-13 15:34:28.320048] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.990 15:34:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:57.990 malloc0 00:22:57.990 15:34:28 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:58.247 15:34:28 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lWhTPabYLA 00:22:58.505 [2024-07-13 15:34:29.113608] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:58.505 15:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1148453 00:22:58.505 15:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:58.505 15:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:58.505 15:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1148453 /var/tmp/bdevperf.sock 00:22:58.505 15:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1148453 ']' 00:22:58.505 15:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.505 15:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:58.505 15:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:58.505 15:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:58.505 15:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.505 [2024-07-13 15:34:29.171805] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:22:58.505 [2024-07-13 15:34:29.171886] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148453 ] 00:22:58.505 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.505 [2024-07-13 15:34:29.202794] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:58.505 [2024-07-13 15:34:29.234175] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.763 [2024-07-13 15:34:29.326620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.763 15:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:58.763 15:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:58.763 15:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lWhTPabYLA 00:22:59.021 15:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:59.279 [2024-07-13 15:34:29.896488] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:59.279 nvme0n1 00:22:59.279 15:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:59.537 Running I/O for 1 seconds... 
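This run attaches the initiator-side controller the keyring way: the PSK file is first registered as a named key (key0) with keyring_file_add_key, and bdev_nvme_attach_controller then references the key by name rather than by raw path, which appears to be what the PSK-path deprecation warnings elsewhere in the log refer to. A condensed sketch against the bdevperf RPC socket, commands copied from the trace:

# Sketch of the keyring-based attach: register the PSK file as a named key,
# then reference it with --psk when attaching the TLS controller.
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lWhTPabYLA
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1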
00:23:00.474 00:23:00.474 Latency(us) 00:23:00.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.474 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:00.474 Verification LBA range: start 0x0 length 0x2000 00:23:00.474 nvme0n1 : 1.05 2341.88 9.15 0.00 0.00 53451.69 7621.59 87381.33 00:23:00.474 =================================================================================================================== 00:23:00.474 Total : 2341.88 9.15 0.00 0.00 53451.69 7621.59 87381.33 00:23:00.474 0 00:23:00.474 15:34:31 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1148453 00:23:00.474 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1148453 ']' 00:23:00.474 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1148453 00:23:00.474 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:00.474 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:00.474 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1148453 00:23:00.474 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:00.475 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:00.475 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1148453' 00:23:00.475 killing process with pid 1148453 00:23:00.475 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1148453 00:23:00.475 Received shutdown signal, test time was about 1.000000 seconds 00:23:00.475 00:23:00.475 Latency(us) 00:23:00.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.475 =================================================================================================================== 00:23:00.475 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:00.475 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1148453 00:23:00.735 15:34:31 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1148172 00:23:00.735 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1148172 ']' 00:23:00.735 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1148172 00:23:00.735 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:00.735 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:00.735 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1148172 00:23:00.735 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:00.735 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:00.735 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1148172' 00:23:00.735 killing process with pid 1148172 00:23:00.735 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1148172 00:23:00.735 [2024-07-13 15:34:31.444317] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:00.735 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1148172 00:23:00.994 15:34:31 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:00.994 15:34:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:00.994 
15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:00.994 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.994 15:34:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1148740 00:23:00.994 15:34:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:00.994 15:34:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1148740 00:23:00.994 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1148740 ']' 00:23:00.994 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.994 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:00.994 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.994 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:00.994 15:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.994 [2024-07-13 15:34:31.740421] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:23:00.994 [2024-07-13 15:34:31.740528] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.254 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.254 [2024-07-13 15:34:31.779632] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:01.254 [2024-07-13 15:34:31.811801] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.254 [2024-07-13 15:34:31.899574] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.254 [2024-07-13 15:34:31.899641] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.254 [2024-07-13 15:34:31.899675] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.254 [2024-07-13 15:34:31.899690] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.254 [2024-07-13 15:34:31.899702] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
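The app_setup_trace notices above amount to a short how-to for pulling tracepoint data out of the running target. A minimal sketch of that workflow, using only the commands the notices themselves name (the snapshot file name and the copy destination are illustrative assumptions, not part of this run):

  # Dump a snapshot of nvmf tracepoint events from app instance 0, as the notice suggests.
  spdk_trace -s nvmf -i 0 > nvmf_trace_snapshot.txt

  # Or keep the raw shared-memory trace buffer for offline analysis/debug.
  cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0.saved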
00:23:01.254 [2024-07-13 15:34:31.899732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.254 15:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:01.254 15:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:01.254 15:34:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:01.254 15:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:01.254 15:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.512 15:34:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.512 15:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:01.512 15:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.512 15:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.512 [2024-07-13 15:34:32.051600] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.512 malloc0 00:23:01.512 [2024-07-13 15:34:32.084526] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:01.512 [2024-07-13 15:34:32.084796] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.512 15:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.512 15:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1148761 00:23:01.512 15:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:01.512 15:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1148761 /var/tmp/bdevperf.sock 00:23:01.512 15:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1148761 ']' 00:23:01.512 15:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.512 15:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:01.512 15:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:01.512 15:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:01.512 15:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.513 [2024-07-13 15:34:32.159450] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:23:01.513 [2024-07-13 15:34:32.159527] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148761 ] 00:23:01.513 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.513 [2024-07-13 15:34:32.196859] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:01.513 [2024-07-13 15:34:32.226702] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.771 [2024-07-13 15:34:32.318052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.771 15:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:01.771 15:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:01.771 15:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lWhTPabYLA 00:23:02.028 15:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:02.296 [2024-07-13 15:34:32.929300] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:02.296 nvme0n1 00:23:02.296 15:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:02.560 Running I/O for 1 seconds... 00:23:03.498 00:23:03.498 Latency(us) 00:23:03.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.498 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:03.498 Verification LBA range: start 0x0 length 0x2000 00:23:03.498 nvme0n1 : 1.05 2354.02 9.20 0.00 0.00 53258.39 8641.04 78837.38 00:23:03.498 =================================================================================================================== 00:23:03.498 Total : 2354.02 9.20 0.00 0.00 53258.39 8641.04 78837.38 00:23:03.498 0 00:23:03.498 15:34:34 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:03.498 15:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.498 15:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.757 15:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.757 15:34:34 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:03.757 "subsystems": [ 00:23:03.757 { 00:23:03.757 "subsystem": "keyring", 00:23:03.757 "config": [ 00:23:03.757 { 00:23:03.757 "method": "keyring_file_add_key", 00:23:03.757 "params": { 00:23:03.757 "name": "key0", 00:23:03.757 "path": "/tmp/tmp.lWhTPabYLA" 00:23:03.757 } 00:23:03.757 } 00:23:03.757 ] 00:23:03.757 }, 00:23:03.757 { 00:23:03.757 "subsystem": "iobuf", 00:23:03.757 "config": [ 00:23:03.757 { 00:23:03.757 "method": "iobuf_set_options", 00:23:03.757 "params": { 00:23:03.757 "small_pool_count": 8192, 00:23:03.757 "large_pool_count": 1024, 00:23:03.757 "small_bufsize": 8192, 00:23:03.757 "large_bufsize": 135168 00:23:03.757 } 00:23:03.757 } 00:23:03.757 ] 00:23:03.757 }, 00:23:03.757 { 00:23:03.758 "subsystem": "sock", 00:23:03.758 "config": [ 00:23:03.758 { 00:23:03.758 "method": "sock_set_default_impl", 00:23:03.758 "params": { 00:23:03.758 "impl_name": "posix" 00:23:03.758 } 00:23:03.758 }, 00:23:03.758 { 00:23:03.758 "method": "sock_impl_set_options", 00:23:03.758 "params": { 00:23:03.758 "impl_name": "ssl", 00:23:03.758 "recv_buf_size": 4096, 00:23:03.758 "send_buf_size": 4096, 00:23:03.758 "enable_recv_pipe": true, 00:23:03.758 "enable_quickack": false, 00:23:03.758 "enable_placement_id": 0, 00:23:03.758 
"enable_zerocopy_send_server": true, 00:23:03.758 "enable_zerocopy_send_client": false, 00:23:03.758 "zerocopy_threshold": 0, 00:23:03.758 "tls_version": 0, 00:23:03.758 "enable_ktls": false 00:23:03.758 } 00:23:03.758 }, 00:23:03.758 { 00:23:03.758 "method": "sock_impl_set_options", 00:23:03.758 "params": { 00:23:03.758 "impl_name": "posix", 00:23:03.758 "recv_buf_size": 2097152, 00:23:03.758 "send_buf_size": 2097152, 00:23:03.758 "enable_recv_pipe": true, 00:23:03.758 "enable_quickack": false, 00:23:03.758 "enable_placement_id": 0, 00:23:03.758 "enable_zerocopy_send_server": true, 00:23:03.758 "enable_zerocopy_send_client": false, 00:23:03.758 "zerocopy_threshold": 0, 00:23:03.758 "tls_version": 0, 00:23:03.758 "enable_ktls": false 00:23:03.758 } 00:23:03.758 } 00:23:03.758 ] 00:23:03.758 }, 00:23:03.758 { 00:23:03.758 "subsystem": "vmd", 00:23:03.758 "config": [] 00:23:03.758 }, 00:23:03.758 { 00:23:03.758 "subsystem": "accel", 00:23:03.758 "config": [ 00:23:03.758 { 00:23:03.758 "method": "accel_set_options", 00:23:03.758 "params": { 00:23:03.758 "small_cache_size": 128, 00:23:03.758 "large_cache_size": 16, 00:23:03.758 "task_count": 2048, 00:23:03.758 "sequence_count": 2048, 00:23:03.758 "buf_count": 2048 00:23:03.758 } 00:23:03.758 } 00:23:03.758 ] 00:23:03.758 }, 00:23:03.758 { 00:23:03.758 "subsystem": "bdev", 00:23:03.758 "config": [ 00:23:03.758 { 00:23:03.758 "method": "bdev_set_options", 00:23:03.758 "params": { 00:23:03.758 "bdev_io_pool_size": 65535, 00:23:03.758 "bdev_io_cache_size": 256, 00:23:03.758 "bdev_auto_examine": true, 00:23:03.758 "iobuf_small_cache_size": 128, 00:23:03.758 "iobuf_large_cache_size": 16 00:23:03.758 } 00:23:03.758 }, 00:23:03.758 { 00:23:03.758 "method": "bdev_raid_set_options", 00:23:03.758 "params": { 00:23:03.758 "process_window_size_kb": 1024 00:23:03.758 } 00:23:03.758 }, 00:23:03.758 { 00:23:03.758 "method": "bdev_iscsi_set_options", 00:23:03.758 "params": { 00:23:03.758 "timeout_sec": 30 00:23:03.758 } 00:23:03.758 }, 00:23:03.758 { 00:23:03.758 "method": "bdev_nvme_set_options", 00:23:03.758 "params": { 00:23:03.758 "action_on_timeout": "none", 00:23:03.758 "timeout_us": 0, 00:23:03.758 "timeout_admin_us": 0, 00:23:03.758 "keep_alive_timeout_ms": 10000, 00:23:03.758 "arbitration_burst": 0, 00:23:03.758 "low_priority_weight": 0, 00:23:03.758 "medium_priority_weight": 0, 00:23:03.758 "high_priority_weight": 0, 00:23:03.758 "nvme_adminq_poll_period_us": 10000, 00:23:03.758 "nvme_ioq_poll_period_us": 0, 00:23:03.758 "io_queue_requests": 0, 00:23:03.758 "delay_cmd_submit": true, 00:23:03.758 "transport_retry_count": 4, 00:23:03.758 "bdev_retry_count": 3, 00:23:03.758 "transport_ack_timeout": 0, 00:23:03.758 "ctrlr_loss_timeout_sec": 0, 00:23:03.758 "reconnect_delay_sec": 0, 00:23:03.758 "fast_io_fail_timeout_sec": 0, 00:23:03.758 "disable_auto_failback": false, 00:23:03.758 "generate_uuids": false, 00:23:03.758 "transport_tos": 0, 00:23:03.758 "nvme_error_stat": false, 00:23:03.758 "rdma_srq_size": 0, 00:23:03.758 "io_path_stat": false, 00:23:03.758 "allow_accel_sequence": false, 00:23:03.758 "rdma_max_cq_size": 0, 00:23:03.758 "rdma_cm_event_timeout_ms": 0, 00:23:03.758 "dhchap_digests": [ 00:23:03.758 "sha256", 00:23:03.758 "sha384", 00:23:03.758 "sha512" 00:23:03.758 ], 00:23:03.758 "dhchap_dhgroups": [ 00:23:03.758 "null", 00:23:03.758 "ffdhe2048", 00:23:03.758 "ffdhe3072", 00:23:03.758 "ffdhe4096", 00:23:03.758 "ffdhe6144", 00:23:03.758 "ffdhe8192" 00:23:03.758 ] 00:23:03.758 } 00:23:03.758 }, 00:23:03.758 { 00:23:03.758 "method": 
"bdev_nvme_set_hotplug", 00:23:03.758 "params": { 00:23:03.758 "period_us": 100000, 00:23:03.758 "enable": false 00:23:03.758 } 00:23:03.758 }, 00:23:03.758 { 00:23:03.758 "method": "bdev_malloc_create", 00:23:03.758 "params": { 00:23:03.758 "name": "malloc0", 00:23:03.758 "num_blocks": 8192, 00:23:03.758 "block_size": 4096, 00:23:03.758 "physical_block_size": 4096, 00:23:03.758 "uuid": "eee58eed-3a35-4952-9b00-261986fc4374", 00:23:03.758 "optimal_io_boundary": 0 00:23:03.758 } 00:23:03.758 }, 00:23:03.758 { 00:23:03.758 "method": "bdev_wait_for_examine" 00:23:03.758 } 00:23:03.758 ] 00:23:03.758 }, 00:23:03.758 { 00:23:03.758 "subsystem": "nbd", 00:23:03.758 "config": [] 00:23:03.758 }, 00:23:03.758 { 00:23:03.758 "subsystem": "scheduler", 00:23:03.758 "config": [ 00:23:03.758 { 00:23:03.758 "method": "framework_set_scheduler", 00:23:03.758 "params": { 00:23:03.758 "name": "static" 00:23:03.758 } 00:23:03.758 } 00:23:03.758 ] 00:23:03.758 }, 00:23:03.758 { 00:23:03.758 "subsystem": "nvmf", 00:23:03.758 "config": [ 00:23:03.758 { 00:23:03.758 "method": "nvmf_set_config", 00:23:03.758 "params": { 00:23:03.758 "discovery_filter": "match_any", 00:23:03.758 "admin_cmd_passthru": { 00:23:03.758 "identify_ctrlr": false 00:23:03.758 } 00:23:03.758 } 00:23:03.758 }, 00:23:03.758 { 00:23:03.758 "method": "nvmf_set_max_subsystems", 00:23:03.758 "params": { 00:23:03.758 "max_subsystems": 1024 00:23:03.758 } 00:23:03.758 }, 00:23:03.758 { 00:23:03.758 "method": "nvmf_set_crdt", 00:23:03.758 "params": { 00:23:03.758 "crdt1": 0, 00:23:03.758 "crdt2": 0, 00:23:03.758 "crdt3": 0 00:23:03.758 } 00:23:03.758 }, 00:23:03.758 { 00:23:03.758 "method": "nvmf_create_transport", 00:23:03.758 "params": { 00:23:03.758 "trtype": "TCP", 00:23:03.758 "max_queue_depth": 128, 00:23:03.758 "max_io_qpairs_per_ctrlr": 127, 00:23:03.758 "in_capsule_data_size": 4096, 00:23:03.758 "max_io_size": 131072, 00:23:03.758 "io_unit_size": 131072, 00:23:03.758 "max_aq_depth": 128, 00:23:03.758 "num_shared_buffers": 511, 00:23:03.758 "buf_cache_size": 4294967295, 00:23:03.758 "dif_insert_or_strip": false, 00:23:03.758 "zcopy": false, 00:23:03.758 "c2h_success": false, 00:23:03.758 "sock_priority": 0, 00:23:03.758 "abort_timeout_sec": 1, 00:23:03.758 "ack_timeout": 0, 00:23:03.758 "data_wr_pool_size": 0 00:23:03.758 } 00:23:03.758 }, 00:23:03.758 { 00:23:03.758 "method": "nvmf_create_subsystem", 00:23:03.758 "params": { 00:23:03.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.758 "allow_any_host": false, 00:23:03.758 "serial_number": "00000000000000000000", 00:23:03.758 "model_number": "SPDK bdev Controller", 00:23:03.758 "max_namespaces": 32, 00:23:03.758 "min_cntlid": 1, 00:23:03.758 "max_cntlid": 65519, 00:23:03.758 "ana_reporting": false 00:23:03.758 } 00:23:03.758 }, 00:23:03.758 { 00:23:03.758 "method": "nvmf_subsystem_add_host", 00:23:03.758 "params": { 00:23:03.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.758 "host": "nqn.2016-06.io.spdk:host1", 00:23:03.758 "psk": "key0" 00:23:03.758 } 00:23:03.758 }, 00:23:03.758 { 00:23:03.758 "method": "nvmf_subsystem_add_ns", 00:23:03.758 "params": { 00:23:03.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.758 "namespace": { 00:23:03.758 "nsid": 1, 00:23:03.758 "bdev_name": "malloc0", 00:23:03.758 "nguid": "EEE58EED3A3549529B00261986FC4374", 00:23:03.758 "uuid": "eee58eed-3a35-4952-9b00-261986fc4374", 00:23:03.758 "no_auto_visible": false 00:23:03.758 } 00:23:03.758 } 00:23:03.758 }, 00:23:03.758 { 00:23:03.758 "method": "nvmf_subsystem_add_listener", 00:23:03.758 "params": { 
00:23:03.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.758 "listen_address": { 00:23:03.758 "trtype": "TCP", 00:23:03.758 "adrfam": "IPv4", 00:23:03.758 "traddr": "10.0.0.2", 00:23:03.758 "trsvcid": "4420" 00:23:03.758 }, 00:23:03.758 "secure_channel": true 00:23:03.758 } 00:23:03.758 } 00:23:03.758 ] 00:23:03.758 } 00:23:03.758 ] 00:23:03.758 }' 00:23:03.758 15:34:34 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:04.019 15:34:34 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:04.019 "subsystems": [ 00:23:04.019 { 00:23:04.019 "subsystem": "keyring", 00:23:04.019 "config": [ 00:23:04.019 { 00:23:04.019 "method": "keyring_file_add_key", 00:23:04.019 "params": { 00:23:04.019 "name": "key0", 00:23:04.019 "path": "/tmp/tmp.lWhTPabYLA" 00:23:04.019 } 00:23:04.019 } 00:23:04.019 ] 00:23:04.019 }, 00:23:04.019 { 00:23:04.019 "subsystem": "iobuf", 00:23:04.019 "config": [ 00:23:04.019 { 00:23:04.019 "method": "iobuf_set_options", 00:23:04.019 "params": { 00:23:04.019 "small_pool_count": 8192, 00:23:04.019 "large_pool_count": 1024, 00:23:04.019 "small_bufsize": 8192, 00:23:04.019 "large_bufsize": 135168 00:23:04.019 } 00:23:04.019 } 00:23:04.019 ] 00:23:04.019 }, 00:23:04.019 { 00:23:04.019 "subsystem": "sock", 00:23:04.019 "config": [ 00:23:04.019 { 00:23:04.019 "method": "sock_set_default_impl", 00:23:04.019 "params": { 00:23:04.019 "impl_name": "posix" 00:23:04.019 } 00:23:04.019 }, 00:23:04.019 { 00:23:04.019 "method": "sock_impl_set_options", 00:23:04.019 "params": { 00:23:04.019 "impl_name": "ssl", 00:23:04.019 "recv_buf_size": 4096, 00:23:04.019 "send_buf_size": 4096, 00:23:04.019 "enable_recv_pipe": true, 00:23:04.019 "enable_quickack": false, 00:23:04.019 "enable_placement_id": 0, 00:23:04.019 "enable_zerocopy_send_server": true, 00:23:04.019 "enable_zerocopy_send_client": false, 00:23:04.019 "zerocopy_threshold": 0, 00:23:04.019 "tls_version": 0, 00:23:04.019 "enable_ktls": false 00:23:04.019 } 00:23:04.019 }, 00:23:04.019 { 00:23:04.019 "method": "sock_impl_set_options", 00:23:04.019 "params": { 00:23:04.019 "impl_name": "posix", 00:23:04.019 "recv_buf_size": 2097152, 00:23:04.019 "send_buf_size": 2097152, 00:23:04.019 "enable_recv_pipe": true, 00:23:04.019 "enable_quickack": false, 00:23:04.019 "enable_placement_id": 0, 00:23:04.019 "enable_zerocopy_send_server": true, 00:23:04.019 "enable_zerocopy_send_client": false, 00:23:04.019 "zerocopy_threshold": 0, 00:23:04.019 "tls_version": 0, 00:23:04.019 "enable_ktls": false 00:23:04.019 } 00:23:04.019 } 00:23:04.019 ] 00:23:04.019 }, 00:23:04.019 { 00:23:04.019 "subsystem": "vmd", 00:23:04.019 "config": [] 00:23:04.019 }, 00:23:04.019 { 00:23:04.019 "subsystem": "accel", 00:23:04.019 "config": [ 00:23:04.019 { 00:23:04.019 "method": "accel_set_options", 00:23:04.019 "params": { 00:23:04.019 "small_cache_size": 128, 00:23:04.019 "large_cache_size": 16, 00:23:04.019 "task_count": 2048, 00:23:04.019 "sequence_count": 2048, 00:23:04.019 "buf_count": 2048 00:23:04.019 } 00:23:04.019 } 00:23:04.019 ] 00:23:04.019 }, 00:23:04.019 { 00:23:04.019 "subsystem": "bdev", 00:23:04.019 "config": [ 00:23:04.019 { 00:23:04.019 "method": "bdev_set_options", 00:23:04.019 "params": { 00:23:04.019 "bdev_io_pool_size": 65535, 00:23:04.019 "bdev_io_cache_size": 256, 00:23:04.019 "bdev_auto_examine": true, 00:23:04.019 "iobuf_small_cache_size": 128, 00:23:04.019 "iobuf_large_cache_size": 16 00:23:04.019 } 00:23:04.019 }, 00:23:04.019 { 
00:23:04.019 "method": "bdev_raid_set_options", 00:23:04.019 "params": { 00:23:04.019 "process_window_size_kb": 1024 00:23:04.019 } 00:23:04.019 }, 00:23:04.019 { 00:23:04.019 "method": "bdev_iscsi_set_options", 00:23:04.019 "params": { 00:23:04.019 "timeout_sec": 30 00:23:04.019 } 00:23:04.019 }, 00:23:04.019 { 00:23:04.019 "method": "bdev_nvme_set_options", 00:23:04.019 "params": { 00:23:04.019 "action_on_timeout": "none", 00:23:04.019 "timeout_us": 0, 00:23:04.019 "timeout_admin_us": 0, 00:23:04.019 "keep_alive_timeout_ms": 10000, 00:23:04.019 "arbitration_burst": 0, 00:23:04.019 "low_priority_weight": 0, 00:23:04.019 "medium_priority_weight": 0, 00:23:04.019 "high_priority_weight": 0, 00:23:04.019 "nvme_adminq_poll_period_us": 10000, 00:23:04.019 "nvme_ioq_poll_period_us": 0, 00:23:04.019 "io_queue_requests": 512, 00:23:04.019 "delay_cmd_submit": true, 00:23:04.019 "transport_retry_count": 4, 00:23:04.019 "bdev_retry_count": 3, 00:23:04.019 "transport_ack_timeout": 0, 00:23:04.019 "ctrlr_loss_timeout_sec": 0, 00:23:04.019 "reconnect_delay_sec": 0, 00:23:04.019 "fast_io_fail_timeout_sec": 0, 00:23:04.019 "disable_auto_failback": false, 00:23:04.019 "generate_uuids": false, 00:23:04.019 "transport_tos": 0, 00:23:04.019 "nvme_error_stat": false, 00:23:04.019 "rdma_srq_size": 0, 00:23:04.019 "io_path_stat": false, 00:23:04.019 "allow_accel_sequence": false, 00:23:04.019 "rdma_max_cq_size": 0, 00:23:04.019 "rdma_cm_event_timeout_ms": 0, 00:23:04.019 "dhchap_digests": [ 00:23:04.019 "sha256", 00:23:04.019 "sha384", 00:23:04.019 "sha512" 00:23:04.019 ], 00:23:04.019 "dhchap_dhgroups": [ 00:23:04.019 "null", 00:23:04.019 "ffdhe2048", 00:23:04.019 "ffdhe3072", 00:23:04.019 "ffdhe4096", 00:23:04.019 "ffdhe6144", 00:23:04.019 "ffdhe8192" 00:23:04.019 ] 00:23:04.019 } 00:23:04.019 }, 00:23:04.019 { 00:23:04.019 "method": "bdev_nvme_attach_controller", 00:23:04.019 "params": { 00:23:04.019 "name": "nvme0", 00:23:04.019 "trtype": "TCP", 00:23:04.019 "adrfam": "IPv4", 00:23:04.019 "traddr": "10.0.0.2", 00:23:04.019 "trsvcid": "4420", 00:23:04.019 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.019 "prchk_reftag": false, 00:23:04.019 "prchk_guard": false, 00:23:04.019 "ctrlr_loss_timeout_sec": 0, 00:23:04.019 "reconnect_delay_sec": 0, 00:23:04.019 "fast_io_fail_timeout_sec": 0, 00:23:04.019 "psk": "key0", 00:23:04.019 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:04.019 "hdgst": false, 00:23:04.019 "ddgst": false 00:23:04.019 } 00:23:04.019 }, 00:23:04.019 { 00:23:04.019 "method": "bdev_nvme_set_hotplug", 00:23:04.019 "params": { 00:23:04.019 "period_us": 100000, 00:23:04.020 "enable": false 00:23:04.020 } 00:23:04.020 }, 00:23:04.020 { 00:23:04.020 "method": "bdev_enable_histogram", 00:23:04.020 "params": { 00:23:04.020 "name": "nvme0n1", 00:23:04.020 "enable": true 00:23:04.020 } 00:23:04.020 }, 00:23:04.020 { 00:23:04.020 "method": "bdev_wait_for_examine" 00:23:04.020 } 00:23:04.020 ] 00:23:04.020 }, 00:23:04.020 { 00:23:04.020 "subsystem": "nbd", 00:23:04.020 "config": [] 00:23:04.020 } 00:23:04.020 ] 00:23:04.020 }' 00:23:04.020 15:34:34 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1148761 00:23:04.020 15:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1148761 ']' 00:23:04.020 15:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1148761 00:23:04.020 15:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:04.020 15:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:04.020 15:34:34 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1148761 00:23:04.020 15:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:04.020 15:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:04.020 15:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1148761' 00:23:04.020 killing process with pid 1148761 00:23:04.020 15:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1148761 00:23:04.020 Received shutdown signal, test time was about 1.000000 seconds 00:23:04.020 00:23:04.020 Latency(us) 00:23:04.020 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.020 =================================================================================================================== 00:23:04.020 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:04.020 15:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1148761 00:23:04.288 15:34:34 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1148740 00:23:04.288 15:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1148740 ']' 00:23:04.288 15:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1148740 00:23:04.288 15:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:04.288 15:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:04.288 15:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1148740 00:23:04.288 15:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:04.288 15:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:04.288 15:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1148740' 00:23:04.288 killing process with pid 1148740 00:23:04.288 15:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1148740 00:23:04.288 15:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1148740 00:23:04.548 15:34:35 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:04.548 15:34:35 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:04.548 "subsystems": [ 00:23:04.548 { 00:23:04.548 "subsystem": "keyring", 00:23:04.548 "config": [ 00:23:04.548 { 00:23:04.548 "method": "keyring_file_add_key", 00:23:04.548 "params": { 00:23:04.548 "name": "key0", 00:23:04.548 "path": "/tmp/tmp.lWhTPabYLA" 00:23:04.548 } 00:23:04.548 } 00:23:04.548 ] 00:23:04.548 }, 00:23:04.548 { 00:23:04.548 "subsystem": "iobuf", 00:23:04.548 "config": [ 00:23:04.548 { 00:23:04.548 "method": "iobuf_set_options", 00:23:04.548 "params": { 00:23:04.548 "small_pool_count": 8192, 00:23:04.548 "large_pool_count": 1024, 00:23:04.548 "small_bufsize": 8192, 00:23:04.548 "large_bufsize": 135168 00:23:04.548 } 00:23:04.548 } 00:23:04.548 ] 00:23:04.548 }, 00:23:04.548 { 00:23:04.548 "subsystem": "sock", 00:23:04.548 "config": [ 00:23:04.548 { 00:23:04.548 "method": "sock_set_default_impl", 00:23:04.548 "params": { 00:23:04.548 "impl_name": "posix" 00:23:04.548 } 00:23:04.548 }, 00:23:04.548 { 00:23:04.548 "method": "sock_impl_set_options", 00:23:04.548 "params": { 00:23:04.548 "impl_name": "ssl", 00:23:04.548 "recv_buf_size": 4096, 00:23:04.548 "send_buf_size": 4096, 00:23:04.548 "enable_recv_pipe": true, 00:23:04.548 "enable_quickack": false, 00:23:04.548 "enable_placement_id": 0, 
00:23:04.548 "enable_zerocopy_send_server": true, 00:23:04.548 "enable_zerocopy_send_client": false, 00:23:04.548 "zerocopy_threshold": 0, 00:23:04.548 "tls_version": 0, 00:23:04.548 "enable_ktls": false 00:23:04.548 } 00:23:04.548 }, 00:23:04.548 { 00:23:04.548 "method": "sock_impl_set_options", 00:23:04.548 "params": { 00:23:04.548 "impl_name": "posix", 00:23:04.548 "recv_buf_size": 2097152, 00:23:04.548 "send_buf_size": 2097152, 00:23:04.548 "enable_recv_pipe": true, 00:23:04.548 "enable_quickack": false, 00:23:04.548 "enable_placement_id": 0, 00:23:04.548 "enable_zerocopy_send_server": true, 00:23:04.548 "enable_zerocopy_send_client": false, 00:23:04.548 "zerocopy_threshold": 0, 00:23:04.548 "tls_version": 0, 00:23:04.548 "enable_ktls": false 00:23:04.548 } 00:23:04.548 } 00:23:04.548 ] 00:23:04.548 }, 00:23:04.548 { 00:23:04.548 "subsystem": "vmd", 00:23:04.548 "config": [] 00:23:04.548 }, 00:23:04.548 { 00:23:04.548 "subsystem": "accel", 00:23:04.548 "config": [ 00:23:04.548 { 00:23:04.548 "method": "accel_set_options", 00:23:04.548 "params": { 00:23:04.548 "small_cache_size": 128, 00:23:04.548 "large_cache_size": 16, 00:23:04.548 "task_count": 2048, 00:23:04.548 "sequence_count": 2048, 00:23:04.548 "buf_count": 2048 00:23:04.548 } 00:23:04.548 } 00:23:04.548 ] 00:23:04.548 }, 00:23:04.548 { 00:23:04.548 "subsystem": "bdev", 00:23:04.548 "config": [ 00:23:04.548 { 00:23:04.548 "method": "bdev_set_options", 00:23:04.548 "params": { 00:23:04.548 "bdev_io_pool_size": 65535, 00:23:04.548 "bdev_io_cache_size": 256, 00:23:04.548 "bdev_auto_examine": true, 00:23:04.548 "iobuf_small_cache_size": 128, 00:23:04.548 "iobuf_large_cache_size": 16 00:23:04.548 } 00:23:04.548 }, 00:23:04.548 { 00:23:04.548 "method": "bdev_raid_set_options", 00:23:04.548 "params": { 00:23:04.548 "process_window_size_kb": 1024 00:23:04.548 } 00:23:04.548 }, 00:23:04.548 { 00:23:04.548 "method": "bdev_iscsi_set_options", 00:23:04.548 "params": { 00:23:04.548 "timeout_sec": 30 00:23:04.548 } 00:23:04.548 }, 00:23:04.548 { 00:23:04.548 "method": "bdev_nvme_set_options", 00:23:04.548 "params": { 00:23:04.549 "action_on_timeout": "none", 00:23:04.549 "timeout_us": 0, 00:23:04.549 "timeout_admin_us": 0, 00:23:04.549 "keep_alive_timeout_ms": 10000, 00:23:04.549 "arbitration_burst": 0, 00:23:04.549 "low_priority_weight": 0, 00:23:04.549 "medium_priority_weight": 0, 00:23:04.549 "high_priority_weight": 0, 00:23:04.549 "nvme_adminq_poll_period_us": 10000, 00:23:04.549 "nvme_ioq_poll_period_us": 0, 00:23:04.549 "io_queue_requests": 0, 00:23:04.549 "delay_cmd_submit": true, 00:23:04.549 "transport_retry_count": 4, 00:23:04.549 "bdev_retry_count": 3, 00:23:04.549 "transport_ack_timeout": 0, 00:23:04.549 "ctrlr_loss_timeout_sec": 0, 00:23:04.549 "reconnect_delay_sec": 0, 00:23:04.549 "fast_io_fail_timeout_sec": 0, 00:23:04.549 "disable_auto_failback": false, 00:23:04.549 "generate_uuids": false, 00:23:04.549 "transport_tos": 0, 00:23:04.549 "nvme_error_stat": false, 00:23:04.549 "rdma_srq_size": 0, 00:23:04.549 "io_path_stat": false, 00:23:04.549 "allow_accel_sequence": false, 00:23:04.549 "rdma_max_cq_size": 0, 00:23:04.549 "rdma_cm_event_timeout_ms": 0, 00:23:04.549 "dhchap_digests": [ 00:23:04.549 "sha256", 00:23:04.549 "sha384", 00:23:04.549 "sha512" 00:23:04.549 ], 00:23:04.549 "dhchap_dhgroups": [ 00:23:04.549 "null", 00:23:04.549 "ffdhe2048", 00:23:04.549 "ffdhe3072", 00:23:04.549 "ffdhe4096", 00:23:04.549 "ffdhe6144", 00:23:04.549 "ffdhe8192" 00:23:04.549 ] 00:23:04.549 } 00:23:04.549 }, 00:23:04.549 { 00:23:04.549 
"method": "bdev_nvme_set_hotplug", 00:23:04.549 "params": { 00:23:04.549 "period_us": 100000, 00:23:04.549 "enable": false 00:23:04.549 } 00:23:04.549 }, 00:23:04.549 { 00:23:04.549 "method": "bdev_malloc_create", 00:23:04.549 "params": { 00:23:04.549 "name": "malloc0", 00:23:04.549 "num_blocks": 8192, 00:23:04.549 "block_size": 4096, 00:23:04.549 "physical_block_size": 4096, 00:23:04.549 "uuid": "eee58eed-3a35-4952-9b00-261986fc4374", 00:23:04.549 "optimal_io_boundary": 0 00:23:04.549 } 00:23:04.549 }, 00:23:04.549 { 00:23:04.549 "method": "bdev_wait_for_examine" 00:23:04.549 } 00:23:04.549 ] 00:23:04.549 }, 00:23:04.549 { 00:23:04.549 "subsystem": "nbd", 00:23:04.549 "config": [] 00:23:04.549 }, 00:23:04.549 { 00:23:04.549 "subsystem": "scheduler", 00:23:04.549 "config": [ 00:23:04.549 { 00:23:04.549 "method": "framework_set_scheduler", 00:23:04.549 "params": { 00:23:04.549 "name": "static" 00:23:04.549 } 00:23:04.549 } 00:23:04.549 ] 00:23:04.549 }, 00:23:04.549 { 00:23:04.549 "subsystem": "nvmf", 00:23:04.549 "config": [ 00:23:04.549 { 00:23:04.549 "method": "nvmf_set_config", 00:23:04.549 "params": { 00:23:04.549 "discovery_filter": "match_any", 00:23:04.549 "admin_cmd_passthru": { 00:23:04.549 "identify_ctrlr": false 00:23:04.549 } 00:23:04.549 } 00:23:04.549 }, 00:23:04.549 { 00:23:04.549 "method": "nvmf_set_max_subsystems", 00:23:04.549 "params": { 00:23:04.549 "max_subsystems": 1024 00:23:04.549 } 00:23:04.549 }, 00:23:04.549 { 00:23:04.549 "method": "nvmf_set_crdt", 00:23:04.549 "params": { 00:23:04.549 "crdt1": 0, 00:23:04.549 "crdt2": 0, 00:23:04.549 "crdt3": 0 00:23:04.549 } 00:23:04.549 }, 00:23:04.549 { 00:23:04.549 "method": "nvmf_create_transport", 00:23:04.549 "params": { 00:23:04.549 "trtype": "TCP", 00:23:04.549 "max_queue_depth": 128, 00:23:04.549 "max_io_qpairs_per_ctrlr": 127, 00:23:04.549 "in_capsule_data_size": 4096, 00:23:04.549 "max_io_size": 131072, 00:23:04.549 "io_unit_size": 131072, 00:23:04.549 "max_aq_depth": 128, 00:23:04.549 "num_shared_buffers": 511, 00:23:04.549 "buf_cache_size": 4294967295, 00:23:04.549 "dif_insert_or_strip": false, 00:23:04.549 "zcopy": false, 00:23:04.549 "c2h_success": false, 00:23:04.549 "sock_priority": 0, 00:23:04.549 "abort_timeout_sec": 1, 00:23:04.549 "ack_timeout": 0, 00:23:04.549 "data_wr_pool_size": 0 00:23:04.549 } 00:23:04.549 }, 00:23:04.549 { 00:23:04.549 "method": "nvmf_create_subsystem", 00:23:04.549 "params": { 00:23:04.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.549 "allow_any_host": false, 00:23:04.549 "serial_number": "00000000000000000000", 00:23:04.549 "model_number": "SPDK bdev Controller", 00:23:04.549 "max_namespaces": 32, 00:23:04.549 "min_cntlid": 1, 00:23:04.549 "max_cntlid": 65519, 00:23:04.549 "ana_reporting": false 00:23:04.549 } 00:23:04.549 }, 00:23:04.549 { 00:23:04.549 "method": "nvmf_subsystem_add_host", 00:23:04.549 "params": { 00:23:04.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.549 "host": "nqn.2016-06.io.spdk:host1", 00:23:04.549 "psk": "key0" 00:23:04.549 } 00:23:04.549 }, 00:23:04.549 { 00:23:04.549 "method": "nvmf_subsystem_add_ns", 00:23:04.549 "params": { 00:23:04.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.549 "namespace": { 00:23:04.549 "nsid": 1, 00:23:04.549 "bdev_name": "malloc0", 00:23:04.549 "nguid": "EEE58EED3A3549529B00261986FC4374", 00:23:04.549 "uuid": "eee58eed-3a35-4952-9b00-261986fc4374", 00:23:04.549 "no_auto_visible": false 00:23:04.549 } 00:23:04.549 } 00:23:04.549 }, 00:23:04.549 { 00:23:04.549 "method": "nvmf_subsystem_add_listener", 00:23:04.549 
"params": { 00:23:04.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.549 "listen_address": { 00:23:04.549 "trtype": "TCP", 00:23:04.549 "adrfam": "IPv4", 00:23:04.549 "traddr": "10.0.0.2", 00:23:04.549 "trsvcid": "4420" 00:23:04.549 }, 00:23:04.549 "secure_channel": true 00:23:04.549 } 00:23:04.549 } 00:23:04.549 ] 00:23:04.549 } 00:23:04.549 ] 00:23:04.549 }' 00:23:04.549 15:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:04.549 15:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:04.549 15:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.549 15:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1149169 00:23:04.549 15:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:04.549 15:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1149169 00:23:04.549 15:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1149169 ']' 00:23:04.549 15:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.549 15:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:04.549 15:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.549 15:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:04.549 15:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.549 [2024-07-13 15:34:35.162435] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:23:04.549 [2024-07-13 15:34:35.162513] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.549 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.549 [2024-07-13 15:34:35.201766] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:04.549 [2024-07-13 15:34:35.228844] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.809 [2024-07-13 15:34:35.317862] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:04.809 [2024-07-13 15:34:35.317945] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:04.809 [2024-07-13 15:34:35.317959] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.809 [2024-07-13 15:34:35.317971] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.809 [2024-07-13 15:34:35.317981] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:04.809 [2024-07-13 15:34:35.318051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.809 [2024-07-13 15:34:35.562173] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.068 [2024-07-13 15:34:35.594184] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:05.068 [2024-07-13 15:34:35.602068] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.637 15:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:05.637 15:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:05.637 15:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:05.637 15:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:05.637 15:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.637 15:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.637 15:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1149319 00:23:05.637 15:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1149319 /var/tmp/bdevperf.sock 00:23:05.637 15:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1149319 ']' 00:23:05.637 15:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:05.637 15:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:05.637 15:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:05.637 15:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:05.637 "subsystems": [ 00:23:05.637 { 00:23:05.637 "subsystem": "keyring", 00:23:05.637 "config": [ 00:23:05.637 { 00:23:05.637 "method": "keyring_file_add_key", 00:23:05.637 "params": { 00:23:05.637 "name": "key0", 00:23:05.637 "path": "/tmp/tmp.lWhTPabYLA" 00:23:05.637 } 00:23:05.637 } 00:23:05.637 ] 00:23:05.637 }, 00:23:05.637 { 00:23:05.637 "subsystem": "iobuf", 00:23:05.637 "config": [ 00:23:05.637 { 00:23:05.637 "method": "iobuf_set_options", 00:23:05.637 "params": { 00:23:05.637 "small_pool_count": 8192, 00:23:05.637 "large_pool_count": 1024, 00:23:05.637 "small_bufsize": 8192, 00:23:05.637 "large_bufsize": 135168 00:23:05.637 } 00:23:05.637 } 00:23:05.637 ] 00:23:05.637 }, 00:23:05.637 { 00:23:05.637 "subsystem": "sock", 00:23:05.637 "config": [ 00:23:05.637 { 00:23:05.637 "method": "sock_set_default_impl", 00:23:05.637 "params": { 00:23:05.637 "impl_name": "posix" 00:23:05.637 } 00:23:05.637 }, 00:23:05.637 { 00:23:05.637 "method": "sock_impl_set_options", 00:23:05.637 "params": { 00:23:05.637 "impl_name": "ssl", 00:23:05.637 "recv_buf_size": 4096, 00:23:05.637 "send_buf_size": 4096, 00:23:05.637 "enable_recv_pipe": true, 00:23:05.637 "enable_quickack": false, 00:23:05.637 "enable_placement_id": 0, 00:23:05.637 "enable_zerocopy_send_server": true, 00:23:05.637 "enable_zerocopy_send_client": false, 00:23:05.637 "zerocopy_threshold": 0, 00:23:05.637 "tls_version": 0, 00:23:05.637 "enable_ktls": false 00:23:05.637 } 00:23:05.637 }, 00:23:05.637 { 00:23:05.637 "method": "sock_impl_set_options", 00:23:05.637 "params": { 00:23:05.637 "impl_name": "posix", 00:23:05.637 "recv_buf_size": 2097152, 00:23:05.637 "send_buf_size": 2097152, 00:23:05.637 
"enable_recv_pipe": true, 00:23:05.637 "enable_quickack": false, 00:23:05.637 "enable_placement_id": 0, 00:23:05.637 "enable_zerocopy_send_server": true, 00:23:05.637 "enable_zerocopy_send_client": false, 00:23:05.637 "zerocopy_threshold": 0, 00:23:05.637 "tls_version": 0, 00:23:05.637 "enable_ktls": false 00:23:05.637 } 00:23:05.637 } 00:23:05.637 ] 00:23:05.637 }, 00:23:05.637 { 00:23:05.637 "subsystem": "vmd", 00:23:05.637 "config": [] 00:23:05.637 }, 00:23:05.637 { 00:23:05.637 "subsystem": "accel", 00:23:05.637 "config": [ 00:23:05.637 { 00:23:05.637 "method": "accel_set_options", 00:23:05.637 "params": { 00:23:05.637 "small_cache_size": 128, 00:23:05.637 "large_cache_size": 16, 00:23:05.637 "task_count": 2048, 00:23:05.637 "sequence_count": 2048, 00:23:05.637 "buf_count": 2048 00:23:05.637 } 00:23:05.637 } 00:23:05.637 ] 00:23:05.637 }, 00:23:05.637 { 00:23:05.637 "subsystem": "bdev", 00:23:05.637 "config": [ 00:23:05.637 { 00:23:05.637 "method": "bdev_set_options", 00:23:05.637 "params": { 00:23:05.637 "bdev_io_pool_size": 65535, 00:23:05.637 "bdev_io_cache_size": 256, 00:23:05.637 "bdev_auto_examine": true, 00:23:05.637 "iobuf_small_cache_size": 128, 00:23:05.637 "iobuf_large_cache_size": 16 00:23:05.637 } 00:23:05.637 }, 00:23:05.637 { 00:23:05.637 "method": "bdev_raid_set_options", 00:23:05.637 "params": { 00:23:05.637 "process_window_size_kb": 1024 00:23:05.637 } 00:23:05.637 }, 00:23:05.637 { 00:23:05.637 "method": "bdev_iscsi_set_options", 00:23:05.637 "params": { 00:23:05.637 "timeout_sec": 30 00:23:05.637 } 00:23:05.637 }, 00:23:05.637 { 00:23:05.637 "method": "bdev_nvme_set_options", 00:23:05.637 "params": { 00:23:05.638 "action_on_timeout": "none", 00:23:05.638 "timeout_us": 0, 00:23:05.638 "timeout_admin_us": 0, 00:23:05.638 "keep_alive_timeout_ms": 10000, 00:23:05.638 "arbitration_burst": 0, 00:23:05.638 "low_priority_weight": 0, 00:23:05.638 "medium_priority_weight": 0, 00:23:05.638 "high_priority_weight": 0, 00:23:05.638 "nvme_adminq_poll_period_us": 10000, 00:23:05.638 "nvme_ioq_poll_period_us": 0, 00:23:05.638 "io_queue_requests": 512, 00:23:05.638 "delay_cmd_submit": true, 00:23:05.638 "transport_retry_count": 4, 00:23:05.638 "bdev_retry_count": 3, 00:23:05.638 "transport_ack_timeout": 0, 00:23:05.638 "ctrlr_loss_timeout_sec": 0, 00:23:05.638 "reconnect_delay_sec": 0, 00:23:05.638 "fast_io_fail_timeout_sec": 0, 00:23:05.638 "disable_auto_failback": false, 00:23:05.638 "generate_uuids": false, 00:23:05.638 "transport_tos": 0, 00:23:05.638 "nvme_error_stat": false, 00:23:05.638 "rdma_srq_size": 0, 00:23:05.638 "io_path_stat": false, 00:23:05.638 "allow_accel_sequence": false, 00:23:05.638 "rdma_max_cq_size": 0, 00:23:05.638 "rdma_cm_event_timeout_ms": 0, 00:23:05.638 "dhchap_digests": [ 00:23:05.638 "sha256", 00:23:05.638 "sha384", 00:23:05.638 "sha512" 00:23:05.638 ], 00:23:05.638 "dhchap_dhgroups": [ 00:23:05.638 "null", 00:23:05.638 "ffdhe2048", 00:23:05.638 "ffdhe3072", 00:23:05.638 "ffdhe4096", 00:23:05.638 "ffdhe6144", 00:23:05.638 "ffdhe8192" 00:23:05.638 ] 00:23:05.638 } 00:23:05.638 }, 00:23:05.638 { 00:23:05.638 "method": "bdev_nvme_attach_controller", 00:23:05.638 "params": { 00:23:05.638 "name": "nvme0", 00:23:05.638 "trtype": "TCP", 00:23:05.638 "adrfam": "IPv4", 00:23:05.638 "traddr": "10.0.0.2", 00:23:05.638 "trsvcid": "4420", 00:23:05.638 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.638 "prchk_reftag": false, 00:23:05.638 "prchk_guard": false, 00:23:05.638 "ctrlr_loss_timeout_sec": 0, 00:23:05.638 "reconnect_delay_sec": 0, 00:23:05.638 
"fast_io_fail_timeout_sec": 0, 00:23:05.638 "psk": "key0", 00:23:05.638 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:05.638 "hdgst": false, 00:23:05.638 "ddgst": false 00:23:05.638 } 00:23:05.638 }, 00:23:05.638 { 00:23:05.638 "method": "bdev_nvme_set_hotplug", 00:23:05.638 "params": { 00:23:05.638 "period_us": 100000, 00:23:05.638 "enable": false 00:23:05.638 } 00:23:05.638 }, 00:23:05.638 { 00:23:05.638 "method": "bdev_enable_histogram", 00:23:05.638 "params": { 00:23:05.638 "name": "nvme0n1", 00:23:05.638 "enable": true 00:23:05.638 } 00:23:05.638 }, 00:23:05.638 { 00:23:05.638 "method": "bdev_wait_for_examine" 00:23:05.638 } 00:23:05.638 ] 00:23:05.638 }, 00:23:05.638 { 00:23:05.638 "subsystem": "nbd", 00:23:05.638 "config": [] 00:23:05.638 } 00:23:05.638 ] 00:23:05.638 }' 00:23:05.638 15:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:05.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:05.638 15:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:05.638 15:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.638 [2024-07-13 15:34:36.178065] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:23:05.638 [2024-07-13 15:34:36.178155] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1149319 ] 00:23:05.638 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.638 [2024-07-13 15:34:36.209593] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:05.638 [2024-07-13 15:34:36.237268] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.638 [2024-07-13 15:34:36.324935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.898 [2024-07-13 15:34:36.504796] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:06.464 15:34:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:06.464 15:34:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:06.464 15:34:37 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:06.464 15:34:37 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:06.722 15:34:37 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.722 15:34:37 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:06.722 Running I/O for 1 seconds... 
00:23:08.101 00:23:08.101 Latency(us) 00:23:08.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.101 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:08.101 Verification LBA range: start 0x0 length 0x2000 00:23:08.101 nvme0n1 : 1.05 2296.65 8.97 0.00 0.00 54629.19 8543.95 94371.84 00:23:08.101 =================================================================================================================== 00:23:08.101 Total : 2296.65 8.97 0.00 0.00 54629.19 8543.95 94371.84 00:23:08.101 0 00:23:08.101 15:34:38 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:08.101 15:34:38 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:08.101 15:34:38 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:08.101 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:23:08.101 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:23:08.101 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:08.101 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:08.101 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:08.101 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:08.101 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:08.101 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:08.101 nvmf_trace.0 00:23:08.101 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:23:08.101 15:34:38 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1149319 00:23:08.101 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1149319 ']' 00:23:08.101 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1149319 00:23:08.101 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:08.101 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:08.101 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1149319 00:23:08.101 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:08.101 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:08.101 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1149319' 00:23:08.101 killing process with pid 1149319 00:23:08.101 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1149319 00:23:08.101 Received shutdown signal, test time was about 1.000000 seconds 00:23:08.101 00:23:08.101 Latency(us) 00:23:08.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.101 =================================================================================================================== 00:23:08.101 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:08.101 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1149319 00:23:08.361 15:34:38 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:08.361 15:34:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:08.361 15:34:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 
00:23:08.361 15:34:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:08.361 15:34:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:08.361 15:34:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:08.361 15:34:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:08.361 rmmod nvme_tcp 00:23:08.361 rmmod nvme_fabrics 00:23:08.361 rmmod nvme_keyring 00:23:08.361 15:34:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:08.361 15:34:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:08.361 15:34:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:08.361 15:34:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1149169 ']' 00:23:08.361 15:34:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1149169 00:23:08.361 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1149169 ']' 00:23:08.361 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1149169 00:23:08.361 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:08.361 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:08.361 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1149169 00:23:08.361 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:08.361 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:08.361 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1149169' 00:23:08.361 killing process with pid 1149169 00:23:08.361 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1149169 00:23:08.361 15:34:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1149169 00:23:08.642 15:34:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:08.642 15:34:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:08.642 15:34:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:08.642 15:34:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:08.642 15:34:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:08.642 15:34:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.642 15:34:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:08.642 15:34:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.538 15:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:10.538 15:34:41 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ptPptYhALS /tmp/tmp.81pk09EtlG /tmp/tmp.lWhTPabYLA 00:23:10.538 00:23:10.538 real 1m18.980s 00:23:10.538 user 1m59.311s 00:23:10.538 sys 0m27.732s 00:23:10.538 15:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:10.538 15:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.538 ************************************ 00:23:10.538 END TEST nvmf_tls 00:23:10.538 ************************************ 00:23:10.538 15:34:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:10.538 15:34:41 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:10.538 15:34:41 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:10.538 15:34:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:10.538 15:34:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:10.538 ************************************ 00:23:10.538 START TEST nvmf_fips 00:23:10.538 ************************************ 00:23:10.538 15:34:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:10.797 * Looking for test storage... 00:23:10.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:10.797 
15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:10.797 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:10.798 Error setting digest 00:23:10.798 00E2C5B9BB7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:10.798 00E2C5B9BB7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:10.798 15:34:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:13.341 
15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:13.341 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:13.341 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:13.341 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:13.342 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:13.342 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:13.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:13.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:23:13.342 00:23:13.342 --- 10.0.0.2 ping statistics --- 00:23:13.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.342 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:13.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:13.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:23:13.342 00:23:13.342 --- 10.0.0.1 ping statistics --- 00:23:13.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.342 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1151678 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1151678 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1151678 ']' 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.342 15:34:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:13.342 [2024-07-13 15:34:43.866080] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:23:13.342 [2024-07-13 15:34:43.866163] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.342 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.342 [2024-07-13 15:34:43.902465] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:13.342 [2024-07-13 15:34:43.930410] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.342 [2024-07-13 15:34:44.021564] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:13.342 [2024-07-13 15:34:44.021622] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.342 [2024-07-13 15:34:44.021635] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.342 [2024-07-13 15:34:44.021647] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.342 [2024-07-13 15:34:44.021656] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:13.342 [2024-07-13 15:34:44.021682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.601 15:34:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:13.601 15:34:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:23:13.601 15:34:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:13.601 15:34:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:13.601 15:34:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:13.601 15:34:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.601 15:34:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:13.601 15:34:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:13.601 15:34:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:13.601 15:34:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:13.601 15:34:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:13.601 15:34:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:13.601 15:34:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:13.601 15:34:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:13.859 [2024-07-13 15:34:44.415284] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.859 [2024-07-13 15:34:44.431261] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:13.859 [2024-07-13 15:34:44.431465] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.859 [2024-07-13 15:34:44.462535] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:13.859 malloc0 00:23:13.859 15:34:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:13.859 15:34:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1151719 00:23:13.859 15:34:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:13.859 15:34:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1151719 /var/tmp/bdevperf.sock 00:23:13.859 15:34:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1151719 ']' 00:23:13.859 15:34:44 
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.859 15:34:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.859 15:34:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:13.859 15:34:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.859 15:34:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:13.859 [2024-07-13 15:34:44.555878] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:23:13.859 [2024-07-13 15:34:44.555980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1151719 ] 00:23:13.859 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.859 [2024-07-13 15:34:44.587768] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:13.859 [2024-07-13 15:34:44.614376] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.117 [2024-07-13 15:34:44.697711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.117 15:34:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.117 15:34:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:23:14.117 15:34:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:14.378 [2024-07-13 15:34:45.086421] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:14.378 [2024-07-13 15:34:45.086540] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:14.637 TLSTESTn1 00:23:14.637 15:34:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:14.637 Running I/O for 10 seconds... 
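For orientation, the TLS portion of the fips test recorded above boils down to a handful of commands. The following is a condensed sketch of what the log shows, not the test script itself: $SPDK is an abbreviation introduced here for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk path, and it assumes the nvmf target is already listening on 10.0.0.2:4420 inside the cvl_0_0_ns_spdk namespace as set up earlier in the log.

    # Write the TLS pre-shared key that both the target and the initiator will present
    echo -n "NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:" > $SPDK/test/nvmf/fips/key.txt
    chmod 0600 $SPDK/test/nvmf/fips/key.txt

    # Start bdevperf as a secondary app with its own RPC socket (128 QD, 4 KiB verify workload, 10 s)
    $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

    # Attach an NVMe/TCP controller over TLS, handing the PSK file to the initiator
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk $SPDK/test/nvmf/fips/key.txt

    # Kick off the verify workload; the IOPS/latency summary it produces follows below
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests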
00:23:24.628 00:23:24.628 Latency(us) 00:23:24.628 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.628 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:24.628 Verification LBA range: start 0x0 length 0x2000 00:23:24.628 TLSTESTn1 : 10.05 2431.06 9.50 0.00 0.00 52508.40 6213.78 77672.30 00:23:24.628 =================================================================================================================== 00:23:24.628 Total : 2431.06 9.50 0.00 0.00 52508.40 6213.78 77672.30 00:23:24.628 0 00:23:24.628 15:34:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:24.628 15:34:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:24.628 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:23:24.628 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:23:24.628 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:24.628 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:24.628 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:24.628 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:24.889 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:24.889 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:24.889 nvmf_trace.0 00:23:24.889 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:23:24.889 15:34:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1151719 00:23:24.889 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1151719 ']' 00:23:24.889 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1151719 00:23:24.889 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:23:24.889 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:24.889 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1151719 00:23:24.889 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:24.889 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:24.889 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1151719' 00:23:24.889 killing process with pid 1151719 00:23:24.889 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1151719 00:23:24.889 Received shutdown signal, test time was about 10.000000 seconds 00:23:24.889 00:23:24.889 Latency(us) 00:23:24.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.889 =================================================================================================================== 00:23:24.889 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:24.889 [2024-07-13 15:34:55.493898] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:24.889 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1151719 00:23:25.150 15:34:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:25.150 15:34:55 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:23:25.150 15:34:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:25.150 15:34:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:25.150 15:34:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:25.150 15:34:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:25.150 15:34:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:25.150 rmmod nvme_tcp 00:23:25.150 rmmod nvme_fabrics 00:23:25.150 rmmod nvme_keyring 00:23:25.150 15:34:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:25.150 15:34:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:25.150 15:34:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:25.150 15:34:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1151678 ']' 00:23:25.150 15:34:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1151678 00:23:25.150 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1151678 ']' 00:23:25.150 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1151678 00:23:25.150 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:23:25.150 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:25.150 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1151678 00:23:25.150 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:25.150 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:25.150 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1151678' 00:23:25.150 killing process with pid 1151678 00:23:25.150 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1151678 00:23:25.150 [2024-07-13 15:34:55.802023] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:25.150 15:34:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1151678 00:23:25.410 15:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:25.410 15:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:25.410 15:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:25.410 15:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:25.410 15:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:25.410 15:34:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.410 15:34:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:25.410 15:34:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.946 15:34:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:27.947 00:23:27.947 real 0m16.796s 00:23:27.947 user 0m20.513s 00:23:27.947 sys 0m6.670s 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:27.947 ************************************ 00:23:27.947 END TEST nvmf_fips 
00:23:27.947 ************************************ 00:23:27.947 15:34:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:27.947 15:34:58 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:23:27.947 15:34:58 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:27.947 15:34:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:27.947 15:34:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:27.947 15:34:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:27.947 ************************************ 00:23:27.947 START TEST nvmf_fuzz 00:23:27.947 ************************************ 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:27.947 * Looking for test storage... 00:23:27.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:27.947 15:34:58 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:27.947 15:34:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:29.873 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:29.873 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:29.873 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:29.873 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:29.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:29.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:23:29.873 00:23:29.873 --- 10.0.0.2 ping statistics --- 00:23:29.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.873 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:29.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:29.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:23:29.873 00:23:29.873 --- 10.0.0.1 ping statistics --- 00:23:29.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.873 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1154953 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1154953 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1154953 ']' 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
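For reference, the nvmf_tcp_init steps traced above reduce to roughly the following sequence (a condensed sketch; the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are specific to this rig, and the target binary path is shortened to a relative path):

# Put the target-side port in its own network namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic back in
ping -c 1 10.0.0.2                                                  # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # and back
# The fuzz target is then launched inside the namespace on a single core:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &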
00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.873 15:35:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:30.132 Malloc0 00:23:30.132 15:35:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.132 15:35:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:30.132 15:35:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.132 15:35:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:30.132 15:35:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.132 15:35:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:30.132 15:35:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.132 15:35:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:30.132 15:35:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.132 15:35:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:30.132 15:35:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.132 15:35:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:30.132 15:35:00 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.132 15:35:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:30.132 15:35:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:02.216 Fuzzing completed. 
Shutting down the fuzz application 00:24:02.216 00:24:02.216 Dumping successful admin opcodes: 00:24:02.216 8, 9, 10, 24, 00:24:02.217 Dumping successful io opcodes: 00:24:02.217 0, 9, 00:24:02.217 NS: 0x200003aeff00 I/O qp, Total commands completed: 481013, total successful commands: 2778, random_seed: 2720062848 00:24:02.217 NS: 0x200003aeff00 admin qp, Total commands completed: 59312, total successful commands: 470, random_seed: 3922258688 00:24:02.217 15:35:31 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:02.217 Fuzzing completed. Shutting down the fuzz application 00:24:02.217 00:24:02.217 Dumping successful admin opcodes: 00:24:02.217 24, 00:24:02.217 Dumping successful io opcodes: 00:24:02.217 00:24:02.217 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1972096783 00:24:02.217 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1972221793 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:02.217 rmmod nvme_tcp 00:24:02.217 rmmod nvme_fabrics 00:24:02.217 rmmod nvme_keyring 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1154953 ']' 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 1154953 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1154953 ']' 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 1154953 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1154953 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
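The two nvme_fuzz passes above exercise the same subsystem in two modes: a time-bounded randomized run (30 seconds; -S 123456 appears to pin the random seed so a failing run can be reproduced) and a replay of the canned commands in example.json. A hedged sketch of the same invocations, with paths shortened and the transport ID matching the listener created earlier:

TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'

# Pass 1: randomized fuzzing for 30 seconds; the summary above shows hits on both the
# admin and I/O queue pairs.
./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a

# Pass 2: replay the example.json command set against the same target.
./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$TRID" -j ./test/app/fuzz/nvme_fuzz/example.json -a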
00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1154953' 00:24:02.217 killing process with pid 1154953 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 1154953 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 1154953 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.217 15:35:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.123 15:35:34 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:04.123 15:35:34 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:04.123 00:24:04.123 real 0m36.698s 00:24:04.123 user 0m50.907s 00:24:04.123 sys 0m14.600s 00:24:04.123 15:35:34 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:04.123 15:35:34 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:04.123 ************************************ 00:24:04.123 END TEST nvmf_fuzz 00:24:04.123 ************************************ 00:24:04.123 15:35:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:04.123 15:35:34 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:04.123 15:35:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:04.123 15:35:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:04.123 15:35:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:04.123 ************************************ 00:24:04.123 START TEST nvmf_multiconnection 00:24:04.123 ************************************ 00:24:04.123 15:35:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:04.382 * Looking for test storage... 
00:24:04.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:04.382 15:35:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.382 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:04.382 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.382 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.382 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.382 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.382 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.382 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.382 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.382 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.382 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.382 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.382 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:04.382 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:04.382 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.382 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.382 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.382 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.382 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.382 15:35:34 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.382 15:35:34 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.382 15:35:34 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.382 15:35:34 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:04.383 15:35:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:06.288 15:35:36 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:06.288 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:06.288 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:06.288 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:06.288 15:35:36 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:06.288 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:06.288 15:35:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:06.288 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
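As in the fuzz run, the two ports of the dual-port E810 NIC are split between namespaces: cvl_0_1 stays in the root namespace as the initiator interface while cvl_0_0 is moved into cvl_0_0_ns_spdk for the target. A quick way to sanity-check that split by hand (hypothetical helper commands, not part of the test scripts):

ip -br addr show dev cvl_0_1                     # initiator port, root namespace, 10.0.0.1/24
ip netns exec cvl_0_0_ns_spdk ip -br addr show   # target port cvl_0_0 plus lo inside the namespace
ip netns pids cvl_0_0_ns_spdk                    # PIDs of anything (e.g. nvmf_tgt) running in it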
00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:06.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:06.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:24:06.548 00:24:06.548 --- 10.0.0.2 ping statistics --- 00:24:06.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.548 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:06.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:06.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:24:06.548 00:24:06.548 --- 10.0.0.1 ping statistics --- 00:24:06.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.548 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1160558 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1160558 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 1160558 ']' 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
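nvmfappstart here launches the target inside the namespace with core mask 0xF (four reactors, visible in the startup notices below) and then waits for the RPC socket to come up. A rough equivalent of that start-and-wait pattern, with relative paths (waitforlisten in autotest_common.sh does more bookkeeping than this loop):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll the RPC socket until the target answers, then provisioning can begin.
until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done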
00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:06.548 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.548 [2024-07-13 15:35:37.201169] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:24:06.548 [2024-07-13 15:35:37.201258] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.548 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.548 [2024-07-13 15:35:37.243694] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:06.548 [2024-07-13 15:35:37.275337] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:06.808 [2024-07-13 15:35:37.372404] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.808 [2024-07-13 15:35:37.372462] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.808 [2024-07-13 15:35:37.372478] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.808 [2024-07-13 15:35:37.372492] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.808 [2024-07-13 15:35:37.372504] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:06.808 [2024-07-13 15:35:37.372630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.808 [2024-07-13 15:35:37.372684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.808 [2024-07-13 15:35:37.372759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:06.808 [2024-07-13 15:35:37.372763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.808 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:06.808 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:24:06.808 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:06.808 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:06.808 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.808 15:35:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.808 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:06.808 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.808 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.808 [2024-07-13 15:35:37.529548] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.808 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.808 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:06.808 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:06.808 15:35:37 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:06.808 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.808 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:06.808 Malloc1 00:24:06.808 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.808 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:06.808 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.808 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.069 [2024-07-13 15:35:37.587060] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.069 Malloc2 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.069 Malloc3 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.069 Malloc4 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 
Malloc4 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.069 Malloc5 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:07.069 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.070 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.070 Malloc6 00:24:07.070 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.070 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:07.070 15:35:37 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.070 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.070 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.070 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:07.070 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.070 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.070 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.070 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:07.070 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.070 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.070 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.070 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:07.070 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:07.070 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.070 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.330 Malloc7 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.330 Malloc8 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.330 Malloc9 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.330 15:35:37 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.330 15:35:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.330 Malloc10 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.330 Malloc11 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.330 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.331 15:35:38 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.331 15:35:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:07.331 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.331 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:07.331 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.331 15:35:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:07.331 15:35:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:07.331 15:35:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:08.265 15:35:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:08.265 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:08.265 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:08.265 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:08.265 15:35:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:10.170 15:35:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:10.170 15:35:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:10.170 15:35:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:24:10.170 15:35:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:10.170 15:35:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:10.170 15:35:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:10.170 15:35:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.170 15:35:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:10.740 15:35:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:10.740 15:35:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:10.740 15:35:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:10.740 15:35:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:10.740 15:35:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:13.274 15:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:13.274 15:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:13.274 15:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:24:13.274 15:35:43 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:13.274 15:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:13.274 15:35:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:13.274 15:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:13.274 15:35:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:13.554 15:35:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:13.554 15:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:13.554 15:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:13.554 15:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:13.554 15:35:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:15.466 15:35:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:15.466 15:35:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:15.466 15:35:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:24:15.466 15:35:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:15.466 15:35:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:15.466 15:35:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:15.466 15:35:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:15.466 15:35:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:16.403 15:35:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:16.403 15:35:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:16.403 15:35:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:16.403 15:35:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:16.403 15:35:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:18.306 15:35:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:18.306 15:35:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:18.306 15:35:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:24:18.306 15:35:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:18.306 15:35:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:18.306 15:35:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 
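The rpc_cmd calls earlier in this log (target/multiconnection.sh lines 21-25) provision one malloc bdev, one NVMe-oF subsystem, one namespace and one TCP listener per iteration before the connects begin. A minimal standalone sketch of that loop, assuming a running nvmf_tgt with a TCP transport already created and using SPDK's scripts/rpc.py in place of the test's rpc_cmd wrapper (the rpc.py path is an assumption; sizes, names and the 10.0.0.2:4420 endpoint mirror the log):

  #!/usr/bin/env bash
  # Sketch of the per-subsystem provisioning loop from multiconnection.sh.
  rpc=./scripts/rpc.py          # assumed path to the SPDK RPC client
  NVMF_SUBSYS=11                # subsystem count used by this test
  ADDR=10.0.0.2 PORT=4420       # listener address/port seen in the log

  for i in $(seq 1 "$NVMF_SUBSYS"); do
      "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"                            # 64 MiB bdev, 512 B blocks
      "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i" # -a allow any host, -s serial
      "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a "$ADDR" -s "$PORT"
  done

Each iteration yields a subsystem whose serial number (SPDK$i) is what the host-side waitforserial checks in this section grep for.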
00:24:18.306 15:35:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.306 15:35:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:18.873 15:35:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:18.873 15:35:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:18.873 15:35:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:18.873 15:35:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:18.873 15:35:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:21.402 15:35:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:21.402 15:35:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:21.402 15:35:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:24:21.402 15:35:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:21.402 15:35:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:21.402 15:35:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:21.402 15:35:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.402 15:35:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:21.971 15:35:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:21.971 15:35:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:21.971 15:35:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:21.971 15:35:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:21.971 15:35:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:23.870 15:35:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:23.870 15:35:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:23.870 15:35:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:24:23.870 15:35:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:23.870 15:35:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:23.870 15:35:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:23.870 15:35:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:23.870 15:35:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:24.434 15:35:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:24.434 15:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:24.434 15:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:24.434 15:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:24.434 15:35:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:26.966 15:35:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:26.966 15:35:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:26.966 15:35:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:24:26.966 15:35:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:26.966 15:35:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:26.966 15:35:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:26.966 15:35:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:26.966 15:35:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:27.226 15:35:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:27.226 15:35:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:27.226 15:35:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:27.226 15:35:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:27.226 15:35:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:29.756 15:35:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:29.756 15:35:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:29.756 15:35:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:24:29.756 15:35:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:29.756 15:35:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:29.756 15:35:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:29.756 15:35:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:29.756 15:35:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:30.014 15:36:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:30.014 15:36:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 
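Every nvme connect in this loop is followed by the waitforserial helper, which polls lsblk until a block device advertising the expected serial shows up. A host-side sketch of that connect-and-wait pattern, assuming nvme-cli is installed and reusing the host NQN/ID and 10.0.0.2:4420 endpoint from the log (the 2-second sleep and ~15-attempt limit match the trace above):

  # Sketch of the connect-and-wait pattern, one iteration per subsystem.
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55

  for i in $(seq 1 11); do
      nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
           -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420

      # waitforserial: sleep 2 s between lsblk checks, give up after ~15 attempts.
      tries=0
      while :; do
          sleep 2
          [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ] && break
          tries=$((tries + 1))
          [ "$tries" -gt 15 ] && { echo "device with serial SPDK$i never appeared" >&2; exit 1; }
      done
  done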
00:24:30.014 15:36:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:30.014 15:36:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:30.014 15:36:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:32.547 15:36:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:32.547 15:36:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:32.547 15:36:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:24:32.547 15:36:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:32.547 15:36:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:32.547 15:36:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:32.547 15:36:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:32.547 15:36:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:33.115 15:36:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:33.115 15:36:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:33.115 15:36:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:33.116 15:36:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:33.116 15:36:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:35.065 15:36:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:35.065 15:36:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:35.065 15:36:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:24:35.065 15:36:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:35.065 15:36:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:35.065 15:36:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:35.065 15:36:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:35.065 15:36:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:35.997 15:36:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:35.997 15:36:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:35.997 15:36:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:35.997 15:36:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:35.997 15:36:06 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1205 -- # sleep 2 00:24:37.948 15:36:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:37.948 15:36:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:37.948 15:36:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:24:37.948 15:36:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:37.948 15:36:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:37.948 15:36:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:37.948 15:36:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:37.948 [global] 00:24:37.948 thread=1 00:24:37.948 invalidate=1 00:24:37.948 rw=read 00:24:37.948 time_based=1 00:24:37.948 runtime=10 00:24:37.948 ioengine=libaio 00:24:37.948 direct=1 00:24:37.948 bs=262144 00:24:37.948 iodepth=64 00:24:37.948 norandommap=1 00:24:37.948 numjobs=1 00:24:37.948 00:24:37.948 [job0] 00:24:37.948 filename=/dev/nvme0n1 00:24:37.948 [job1] 00:24:37.948 filename=/dev/nvme10n1 00:24:37.948 [job2] 00:24:37.948 filename=/dev/nvme1n1 00:24:37.948 [job3] 00:24:37.948 filename=/dev/nvme2n1 00:24:37.948 [job4] 00:24:37.948 filename=/dev/nvme3n1 00:24:37.948 [job5] 00:24:37.948 filename=/dev/nvme4n1 00:24:37.948 [job6] 00:24:37.948 filename=/dev/nvme5n1 00:24:37.948 [job7] 00:24:37.948 filename=/dev/nvme6n1 00:24:37.948 [job8] 00:24:37.948 filename=/dev/nvme7n1 00:24:37.948 [job9] 00:24:37.948 filename=/dev/nvme8n1 00:24:37.948 [job10] 00:24:37.948 filename=/dev/nvme9n1 00:24:37.948 Could not set queue depth (nvme0n1) 00:24:37.948 Could not set queue depth (nvme10n1) 00:24:37.948 Could not set queue depth (nvme1n1) 00:24:37.948 Could not set queue depth (nvme2n1) 00:24:37.948 Could not set queue depth (nvme3n1) 00:24:37.948 Could not set queue depth (nvme4n1) 00:24:37.948 Could not set queue depth (nvme5n1) 00:24:37.948 Could not set queue depth (nvme6n1) 00:24:37.948 Could not set queue depth (nvme7n1) 00:24:37.948 Could not set queue depth (nvme8n1) 00:24:37.948 Could not set queue depth (nvme9n1) 00:24:38.204 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:38.204 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:38.204 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:38.204 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:38.204 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:38.204 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:38.204 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:38.204 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:38.204 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:38.204 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:38.204 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:38.204 fio-3.35 00:24:38.204 Starting 11 threads 00:24:50.410 00:24:50.410 job0: (groupid=0, jobs=1): err= 0: pid=1165406: Sat Jul 13 15:36:19 2024 00:24:50.410 read: IOPS=602, BW=151MiB/s (158MB/s)(1522MiB/10102msec) 00:24:50.410 slat (usec): min=9, max=52985, avg=1349.84, stdev=4237.26 00:24:50.410 clat (usec): min=1658, max=252013, avg=104784.16, stdev=38151.09 00:24:50.410 lat (usec): min=1678, max=252040, avg=106134.00, stdev=38704.72 00:24:50.410 clat percentiles (msec): 00:24:50.410 | 1.00th=[ 7], 5.00th=[ 44], 10.00th=[ 60], 20.00th=[ 73], 00:24:50.410 | 30.00th=[ 81], 40.00th=[ 91], 50.00th=[ 105], 60.00th=[ 121], 00:24:50.410 | 70.00th=[ 132], 80.00th=[ 140], 90.00th=[ 150], 95.00th=[ 161], 00:24:50.410 | 99.00th=[ 178], 99.50th=[ 190], 99.90th=[ 215], 99.95th=[ 239], 00:24:50.410 | 99.99th=[ 253] 00:24:50.410 bw ( KiB/s): min=98304, max=234496, per=8.61%, avg=154180.60, stdev=42009.68, samples=20 00:24:50.410 iops : min= 384, max= 916, avg=602.20, stdev=164.09, samples=20 00:24:50.410 lat (msec) : 2=0.02%, 4=0.33%, 10=1.12%, 20=0.87%, 50=4.16% 00:24:50.411 lat (msec) : 100=40.63%, 250=52.85%, 500=0.03% 00:24:50.411 cpu : usr=0.32%, sys=2.00%, ctx=1356, majf=0, minf=3721 00:24:50.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:50.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.411 issued rwts: total=6087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.411 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.411 job1: (groupid=0, jobs=1): err= 0: pid=1165408: Sat Jul 13 15:36:19 2024 00:24:50.411 read: IOPS=587, BW=147MiB/s (154MB/s)(1488MiB/10124msec) 00:24:50.411 slat (usec): min=10, max=146394, avg=1021.68, stdev=4528.29 00:24:50.411 clat (msec): min=2, max=272, avg=107.78, stdev=43.59 00:24:50.411 lat (msec): min=2, max=315, avg=108.80, stdev=44.27 00:24:50.411 clat percentiles (msec): 00:24:50.411 | 1.00th=[ 12], 5.00th=[ 39], 10.00th=[ 57], 20.00th=[ 74], 00:24:50.411 | 30.00th=[ 85], 40.00th=[ 95], 50.00th=[ 106], 60.00th=[ 116], 00:24:50.411 | 70.00th=[ 130], 80.00th=[ 144], 90.00th=[ 161], 95.00th=[ 176], 00:24:50.411 | 99.00th=[ 234], 99.50th=[ 241], 99.90th=[ 266], 99.95th=[ 271], 00:24:50.411 | 99.99th=[ 275] 00:24:50.411 bw ( KiB/s): min=86528, max=268800, per=8.41%, avg=150690.90, stdev=47400.07, samples=20 00:24:50.411 iops : min= 338, max= 1050, avg=588.60, stdev=185.13, samples=20 00:24:50.411 lat (msec) : 4=0.39%, 10=0.47%, 20=1.70%, 50=5.26%, 100=36.88% 00:24:50.411 lat (msec) : 250=55.15%, 500=0.15% 00:24:50.411 cpu : usr=0.32%, sys=1.82%, ctx=1568, majf=0, minf=4097 00:24:50.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:24:50.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.411 issued rwts: total=5951,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.411 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.411 job2: (groupid=0, jobs=1): err= 0: pid=1165410: Sat Jul 13 15:36:19 2024 00:24:50.411 read: IOPS=794, BW=199MiB/s (208MB/s)(2018MiB/10164msec) 00:24:50.411 slat (usec): min=9, max=124414, avg=1121.10, stdev=3509.71 00:24:50.411 clat (usec): min=1264, 
max=315564, avg=79393.99, stdev=38913.66 00:24:50.411 lat (usec): min=1280, max=315613, avg=80515.08, stdev=39277.73 00:24:50.411 clat percentiles (msec): 00:24:50.411 | 1.00th=[ 6], 5.00th=[ 23], 10.00th=[ 43], 20.00th=[ 55], 00:24:50.411 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 74], 60.00th=[ 81], 00:24:50.411 | 70.00th=[ 89], 80.00th=[ 102], 90.00th=[ 126], 95.00th=[ 144], 00:24:50.411 | 99.00th=[ 253], 99.50th=[ 296], 99.90th=[ 309], 99.95th=[ 317], 00:24:50.411 | 99.99th=[ 317] 00:24:50.411 bw ( KiB/s): min=120079, max=352768, per=11.45%, avg=204994.30, stdev=53781.04, samples=20 00:24:50.411 iops : min= 469, max= 1378, avg=800.75, stdev=210.08, samples=20 00:24:50.411 lat (msec) : 2=0.07%, 4=0.24%, 10=1.80%, 20=2.28%, 50=11.36% 00:24:50.411 lat (msec) : 100=63.91%, 250=19.29%, 500=1.05% 00:24:50.411 cpu : usr=0.37%, sys=2.73%, ctx=1655, majf=0, minf=4097 00:24:50.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:50.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.411 issued rwts: total=8072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.411 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.411 job3: (groupid=0, jobs=1): err= 0: pid=1165411: Sat Jul 13 15:36:19 2024 00:24:50.411 read: IOPS=553, BW=138MiB/s (145MB/s)(1407MiB/10163msec) 00:24:50.411 slat (usec): min=9, max=91000, avg=1288.42, stdev=4940.04 00:24:50.411 clat (msec): min=4, max=313, avg=114.18, stdev=47.47 00:24:50.411 lat (msec): min=4, max=313, avg=115.46, stdev=48.34 00:24:50.411 clat percentiles (msec): 00:24:50.411 | 1.00th=[ 14], 5.00th=[ 44], 10.00th=[ 54], 20.00th=[ 74], 00:24:50.411 | 30.00th=[ 86], 40.00th=[ 100], 50.00th=[ 111], 60.00th=[ 131], 00:24:50.411 | 70.00th=[ 144], 80.00th=[ 153], 90.00th=[ 169], 95.00th=[ 188], 00:24:50.411 | 99.00th=[ 236], 99.50th=[ 253], 99.90th=[ 292], 99.95th=[ 305], 00:24:50.411 | 99.99th=[ 313] 00:24:50.411 bw ( KiB/s): min=88064, max=294400, per=7.95%, avg=142421.90, stdev=52199.72, samples=20 00:24:50.411 iops : min= 344, max= 1150, avg=556.20, stdev=203.96, samples=20 00:24:50.411 lat (msec) : 10=0.37%, 20=1.30%, 50=6.65%, 100=32.92%, 250=58.21% 00:24:50.411 lat (msec) : 500=0.55% 00:24:50.411 cpu : usr=0.22%, sys=1.76%, ctx=1382, majf=0, minf=4097 00:24:50.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:50.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.411 issued rwts: total=5628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.411 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.411 job4: (groupid=0, jobs=1): err= 0: pid=1165413: Sat Jul 13 15:36:19 2024 00:24:50.411 read: IOPS=600, BW=150MiB/s (157MB/s)(1520MiB/10123msec) 00:24:50.411 slat (usec): min=12, max=87911, avg=1637.07, stdev=4515.75 00:24:50.411 clat (msec): min=4, max=270, avg=104.81, stdev=35.31 00:24:50.411 lat (msec): min=4, max=270, avg=106.45, stdev=35.85 00:24:50.411 clat percentiles (msec): 00:24:50.411 | 1.00th=[ 37], 5.00th=[ 56], 10.00th=[ 66], 20.00th=[ 75], 00:24:50.411 | 30.00th=[ 82], 40.00th=[ 89], 50.00th=[ 97], 60.00th=[ 110], 00:24:50.411 | 70.00th=[ 128], 80.00th=[ 140], 90.00th=[ 155], 95.00th=[ 163], 00:24:50.411 | 99.00th=[ 184], 99.50th=[ 209], 99.90th=[ 257], 99.95th=[ 257], 00:24:50.411 | 99.99th=[ 271] 00:24:50.411 bw ( KiB/s): min=95232, 
max=274432, per=8.60%, avg=154022.55, stdev=46126.41, samples=20 00:24:50.411 iops : min= 372, max= 1072, avg=601.60, stdev=180.14, samples=20 00:24:50.411 lat (msec) : 10=0.23%, 50=2.75%, 100=49.35%, 250=47.56%, 500=0.12% 00:24:50.411 cpu : usr=0.36%, sys=2.05%, ctx=1243, majf=0, minf=4097 00:24:50.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:50.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.411 issued rwts: total=6081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.411 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.411 job5: (groupid=0, jobs=1): err= 0: pid=1165415: Sat Jul 13 15:36:19 2024 00:24:50.411 read: IOPS=767, BW=192MiB/s (201MB/s)(1940MiB/10102msec) 00:24:50.411 slat (usec): min=8, max=83794, avg=1162.11, stdev=3882.76 00:24:50.411 clat (msec): min=2, max=262, avg=82.11, stdev=53.90 00:24:50.411 lat (msec): min=2, max=262, avg=83.28, stdev=54.70 00:24:50.411 clat percentiles (msec): 00:24:50.411 | 1.00th=[ 9], 5.00th=[ 26], 10.00th=[ 33], 20.00th=[ 35], 00:24:50.411 | 30.00th=[ 37], 40.00th=[ 45], 50.00th=[ 67], 60.00th=[ 88], 00:24:50.411 | 70.00th=[ 111], 80.00th=[ 140], 90.00th=[ 159], 95.00th=[ 174], 00:24:50.411 | 99.00th=[ 232], 99.50th=[ 243], 99.90th=[ 259], 99.95th=[ 262], 00:24:50.411 | 99.99th=[ 264] 00:24:50.411 bw ( KiB/s): min=73728, max=433152, per=11.00%, avg=196952.70, stdev=108190.17, samples=20 00:24:50.411 iops : min= 288, max= 1692, avg=769.25, stdev=422.62, samples=20 00:24:50.411 lat (msec) : 4=0.06%, 10=1.65%, 20=2.68%, 50=37.56%, 100=22.75% 00:24:50.411 lat (msec) : 250=35.09%, 500=0.21% 00:24:50.411 cpu : usr=0.41%, sys=2.36%, ctx=1600, majf=0, minf=4097 00:24:50.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:50.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.411 issued rwts: total=7758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.411 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.411 job6: (groupid=0, jobs=1): err= 0: pid=1165424: Sat Jul 13 15:36:19 2024 00:24:50.411 read: IOPS=533, BW=133MiB/s (140MB/s)(1351MiB/10123msec) 00:24:50.411 slat (usec): min=9, max=70000, avg=1327.53, stdev=4496.23 00:24:50.411 clat (msec): min=6, max=321, avg=118.49, stdev=41.94 00:24:50.411 lat (msec): min=6, max=321, avg=119.82, stdev=42.36 00:24:50.411 clat percentiles (msec): 00:24:50.411 | 1.00th=[ 26], 5.00th=[ 51], 10.00th=[ 74], 20.00th=[ 85], 00:24:50.411 | 30.00th=[ 93], 40.00th=[ 104], 50.00th=[ 120], 60.00th=[ 132], 00:24:50.411 | 70.00th=[ 142], 80.00th=[ 150], 90.00th=[ 165], 95.00th=[ 182], 00:24:50.411 | 99.00th=[ 243], 99.50th=[ 266], 99.90th=[ 317], 99.95th=[ 317], 00:24:50.411 | 99.99th=[ 321] 00:24:50.411 bw ( KiB/s): min=93696, max=188928, per=7.63%, avg=136644.10, stdev=30218.28, samples=20 00:24:50.411 iops : min= 366, max= 738, avg=533.70, stdev=117.96, samples=20 00:24:50.411 lat (msec) : 10=0.15%, 20=0.61%, 50=4.26%, 100=32.17%, 250=62.06% 00:24:50.411 lat (msec) : 500=0.76% 00:24:50.411 cpu : usr=0.33%, sys=1.59%, ctx=1348, majf=0, minf=4097 00:24:50.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:50.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>=64=0.0% 00:24:50.411 issued rwts: total=5403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.411 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.411 job7: (groupid=0, jobs=1): err= 0: pid=1165434: Sat Jul 13 15:36:19 2024 00:24:50.411 read: IOPS=653, BW=163MiB/s (171MB/s)(1649MiB/10101msec) 00:24:50.411 slat (usec): min=9, max=71470, avg=992.61, stdev=3920.21 00:24:50.411 clat (usec): min=966, max=243135, avg=96932.61, stdev=46311.37 00:24:50.411 lat (usec): min=984, max=243160, avg=97925.22, stdev=46869.44 00:24:50.411 clat percentiles (msec): 00:24:50.411 | 1.00th=[ 4], 5.00th=[ 14], 10.00th=[ 32], 20.00th=[ 60], 00:24:50.411 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 91], 60.00th=[ 110], 00:24:50.411 | 70.00th=[ 133], 80.00th=[ 144], 90.00th=[ 157], 95.00th=[ 167], 00:24:50.411 | 99.00th=[ 184], 99.50th=[ 194], 99.90th=[ 220], 99.95th=[ 226], 00:24:50.411 | 99.99th=[ 243] 00:24:50.411 bw ( KiB/s): min=97280, max=259072, per=9.34%, avg=167241.45, stdev=49167.26, samples=20 00:24:50.411 iops : min= 380, max= 1012, avg=653.20, stdev=192.08, samples=20 00:24:50.411 lat (usec) : 1000=0.03% 00:24:50.411 lat (msec) : 2=0.44%, 4=0.65%, 10=2.18%, 20=3.32%, 50=8.40% 00:24:50.411 lat (msec) : 100=40.72%, 250=44.26% 00:24:50.411 cpu : usr=0.37%, sys=2.06%, ctx=1651, majf=0, minf=4097 00:24:50.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:50.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.411 issued rwts: total=6597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.411 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.411 job8: (groupid=0, jobs=1): err= 0: pid=1165467: Sat Jul 13 15:36:19 2024 00:24:50.411 read: IOPS=688, BW=172MiB/s (180MB/s)(1743MiB/10127msec) 00:24:50.411 slat (usec): min=9, max=124728, avg=866.53, stdev=4428.84 00:24:50.411 clat (usec): min=1005, max=299541, avg=92040.39, stdev=56220.06 00:24:50.411 lat (usec): min=1063, max=303317, avg=92906.93, stdev=56914.55 00:24:50.411 clat percentiles (msec): 00:24:50.412 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 18], 20.00th=[ 32], 00:24:50.412 | 30.00th=[ 57], 40.00th=[ 73], 50.00th=[ 91], 60.00th=[ 107], 00:24:50.412 | 70.00th=[ 128], 80.00th=[ 144], 90.00th=[ 165], 95.00th=[ 184], 00:24:50.412 | 99.00th=[ 228], 99.50th=[ 234], 99.90th=[ 247], 99.95th=[ 279], 00:24:50.412 | 99.99th=[ 300] 00:24:50.412 bw ( KiB/s): min=91136, max=344576, per=9.87%, avg=176790.00, stdev=69819.09, samples=20 00:24:50.412 iops : min= 356, max= 1346, avg=690.55, stdev=272.74, samples=20 00:24:50.412 lat (msec) : 2=0.20%, 4=0.50%, 10=3.54%, 20=7.69%, 50=15.15% 00:24:50.412 lat (msec) : 100=28.55%, 250=44.30%, 500=0.07% 00:24:50.412 cpu : usr=0.36%, sys=1.83%, ctx=1854, majf=0, minf=4097 00:24:50.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:50.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.412 issued rwts: total=6971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.412 job9: (groupid=0, jobs=1): err= 0: pid=1165496: Sat Jul 13 15:36:19 2024 00:24:50.412 read: IOPS=653, BW=163MiB/s (171MB/s)(1637MiB/10017msec) 00:24:50.412 slat (usec): min=9, max=101701, avg=1012.14, stdev=3881.56 00:24:50.412 clat (usec): min=1189, max=215241, avg=96851.47, 
stdev=45916.88 00:24:50.412 lat (usec): min=1213, max=240319, avg=97863.61, stdev=46309.58 00:24:50.412 clat percentiles (msec): 00:24:50.412 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 26], 20.00th=[ 51], 00:24:50.412 | 30.00th=[ 77], 40.00th=[ 91], 50.00th=[ 102], 60.00th=[ 113], 00:24:50.412 | 70.00th=[ 126], 80.00th=[ 140], 90.00th=[ 155], 95.00th=[ 165], 00:24:50.412 | 99.00th=[ 188], 99.50th=[ 199], 99.90th=[ 207], 99.95th=[ 211], 00:24:50.412 | 99.99th=[ 215] 00:24:50.412 bw ( KiB/s): min=111104, max=307200, per=9.27%, avg=165972.25, stdev=51364.35, samples=20 00:24:50.412 iops : min= 434, max= 1200, avg=648.20, stdev=200.72, samples=20 00:24:50.412 lat (msec) : 2=0.06%, 4=0.52%, 10=3.18%, 20=4.34%, 50=11.43% 00:24:50.412 lat (msec) : 100=29.56%, 250=50.92% 00:24:50.412 cpu : usr=0.21%, sys=1.98%, ctx=1659, majf=0, minf=4097 00:24:50.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:50.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.412 issued rwts: total=6547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.412 job10: (groupid=0, jobs=1): err= 0: pid=1165511: Sat Jul 13 15:36:19 2024 00:24:50.412 read: IOPS=595, BW=149MiB/s (156MB/s)(1503MiB/10103msec) 00:24:50.412 slat (usec): min=9, max=129689, avg=948.48, stdev=4295.85 00:24:50.412 clat (msec): min=3, max=298, avg=106.50, stdev=50.09 00:24:50.412 lat (msec): min=3, max=298, avg=107.45, stdev=50.55 00:24:50.412 clat percentiles (msec): 00:24:50.412 | 1.00th=[ 9], 5.00th=[ 29], 10.00th=[ 48], 20.00th=[ 62], 00:24:50.412 | 30.00th=[ 73], 40.00th=[ 87], 50.00th=[ 103], 60.00th=[ 120], 00:24:50.412 | 70.00th=[ 142], 80.00th=[ 155], 90.00th=[ 167], 95.00th=[ 182], 00:24:50.412 | 99.00th=[ 234], 99.50th=[ 255], 99.90th=[ 300], 99.95th=[ 300], 00:24:50.412 | 99.99th=[ 300] 00:24:50.412 bw ( KiB/s): min=98816, max=270848, per=8.50%, avg=152265.50, stdev=42823.60, samples=20 00:24:50.412 iops : min= 386, max= 1058, avg=594.75, stdev=167.27, samples=20 00:24:50.412 lat (msec) : 4=0.22%, 10=0.90%, 20=1.76%, 50=8.42%, 100=37.06% 00:24:50.412 lat (msec) : 250=50.80%, 500=0.85% 00:24:50.412 cpu : usr=0.27%, sys=1.65%, ctx=1604, majf=0, minf=4097 00:24:50.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:50.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.412 issued rwts: total=6012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.412 00:24:50.412 Run status group 0 (all jobs): 00:24:50.412 READ: bw=1749MiB/s (1834MB/s), 133MiB/s-199MiB/s (140MB/s-208MB/s), io=17.4GiB (18.6GB), run=10017-10164msec 00:24:50.412 00:24:50.412 Disk stats (read/write): 00:24:50.412 nvme0n1: ios=11990/0, merge=0/0, ticks=1233352/0, in_queue=1233352, util=97.01% 00:24:50.412 nvme10n1: ios=11725/0, merge=0/0, ticks=1237778/0, in_queue=1237778, util=97.23% 00:24:50.412 nvme1n1: ios=15993/0, merge=0/0, ticks=1232973/0, in_queue=1232973, util=97.49% 00:24:50.412 nvme2n1: ios=11071/0, merge=0/0, ticks=1233591/0, in_queue=1233591, util=97.63% 00:24:50.412 nvme3n1: ios=11972/0, merge=0/0, ticks=1227380/0, in_queue=1227380, util=97.72% 00:24:50.412 nvme4n1: ios=15243/0, merge=0/0, ticks=1232988/0, in_queue=1232988, util=98.05% 00:24:50.412 
nvme5n1: ios=10605/0, merge=0/0, ticks=1234090/0, in_queue=1234090, util=98.22% 00:24:50.412 nvme6n1: ios=12979/0, merge=0/0, ticks=1237345/0, in_queue=1237345, util=98.34% 00:24:50.412 nvme7n1: ios=13528/0, merge=0/0, ticks=1231636/0, in_queue=1231636, util=98.83% 00:24:50.412 nvme8n1: ios=12792/0, merge=0/0, ticks=1242923/0, in_queue=1242923, util=99.04% 00:24:50.412 nvme9n1: ios=11830/0, merge=0/0, ticks=1235280/0, in_queue=1235280, util=99.21% 00:24:50.412 15:36:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:50.412 [global] 00:24:50.412 thread=1 00:24:50.412 invalidate=1 00:24:50.412 rw=randwrite 00:24:50.412 time_based=1 00:24:50.412 runtime=10 00:24:50.412 ioengine=libaio 00:24:50.412 direct=1 00:24:50.412 bs=262144 00:24:50.412 iodepth=64 00:24:50.412 norandommap=1 00:24:50.412 numjobs=1 00:24:50.412 00:24:50.412 [job0] 00:24:50.412 filename=/dev/nvme0n1 00:24:50.412 [job1] 00:24:50.412 filename=/dev/nvme10n1 00:24:50.412 [job2] 00:24:50.412 filename=/dev/nvme1n1 00:24:50.412 [job3] 00:24:50.412 filename=/dev/nvme2n1 00:24:50.412 [job4] 00:24:50.412 filename=/dev/nvme3n1 00:24:50.412 [job5] 00:24:50.412 filename=/dev/nvme4n1 00:24:50.412 [job6] 00:24:50.412 filename=/dev/nvme5n1 00:24:50.412 [job7] 00:24:50.412 filename=/dev/nvme6n1 00:24:50.412 [job8] 00:24:50.412 filename=/dev/nvme7n1 00:24:50.412 [job9] 00:24:50.412 filename=/dev/nvme8n1 00:24:50.412 [job10] 00:24:50.412 filename=/dev/nvme9n1 00:24:50.412 Could not set queue depth (nvme0n1) 00:24:50.412 Could not set queue depth (nvme10n1) 00:24:50.412 Could not set queue depth (nvme1n1) 00:24:50.412 Could not set queue depth (nvme2n1) 00:24:50.412 Could not set queue depth (nvme3n1) 00:24:50.412 Could not set queue depth (nvme4n1) 00:24:50.412 Could not set queue depth (nvme5n1) 00:24:50.412 Could not set queue depth (nvme6n1) 00:24:50.412 Could not set queue depth (nvme7n1) 00:24:50.412 Could not set queue depth (nvme8n1) 00:24:50.412 Could not set queue depth (nvme9n1) 00:24:50.412 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:50.412 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:50.412 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:50.412 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:50.412 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:50.412 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:50.412 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:50.412 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:50.412 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:50.412 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:50.412 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:50.412 fio-3.35 00:24:50.412 
Starting 11 threads 00:25:00.395 00:25:00.395 job0: (groupid=0, jobs=1): err= 0: pid=1166598: Sat Jul 13 15:36:30 2024 00:25:00.395 write: IOPS=703, BW=176MiB/s (185MB/s)(1803MiB/10248msec); 0 zone resets 00:25:00.395 slat (usec): min=21, max=83454, avg=1045.52, stdev=3250.34 00:25:00.395 clat (usec): min=1183, max=521438, avg=89794.26, stdev=75390.45 00:25:00.395 lat (usec): min=1231, max=521470, avg=90839.78, stdev=76128.80 00:25:00.395 clat percentiles (msec): 00:25:00.395 | 1.00th=[ 8], 5.00th=[ 29], 10.00th=[ 41], 20.00th=[ 43], 00:25:00.395 | 30.00th=[ 45], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 53], 00:25:00.395 | 70.00th=[ 113], 80.00th=[ 134], 90.00th=[ 215], 95.00th=[ 247], 00:25:00.395 | 99.00th=[ 326], 99.50th=[ 368], 99.90th=[ 493], 99.95th=[ 518], 00:25:00.395 | 99.99th=[ 523] 00:25:00.395 bw ( KiB/s): min=53248, max=377856, per=14.33%, avg=183014.40, stdev=116215.21, samples=20 00:25:00.395 iops : min= 208, max= 1476, avg=714.90, stdev=453.97, samples=20 00:25:00.395 lat (msec) : 2=0.11%, 4=0.29%, 10=1.01%, 20=1.69%, 50=56.33% 00:25:00.395 lat (msec) : 100=6.75%, 250=29.17%, 500=4.56%, 750=0.08% 00:25:00.395 cpu : usr=1.99%, sys=2.19%, ctx=3076, majf=0, minf=1 00:25:00.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:00.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:00.395 issued rwts: total=0,7213,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.395 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:00.395 job1: (groupid=0, jobs=1): err= 0: pid=1166610: Sat Jul 13 15:36:30 2024 00:25:00.395 write: IOPS=788, BW=197MiB/s (207MB/s)(2018MiB/10241msec); 0 zone resets 00:25:00.395 slat (usec): min=19, max=104664, avg=983.97, stdev=2705.16 00:25:00.395 clat (usec): min=1996, max=581345, avg=80175.72, stdev=51962.76 00:25:00.395 lat (msec): min=2, max=581, avg=81.16, stdev=52.27 00:25:00.395 clat percentiles (msec): 00:25:00.395 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 43], 20.00th=[ 47], 00:25:00.395 | 30.00th=[ 52], 40.00th=[ 71], 50.00th=[ 77], 60.00th=[ 79], 00:25:00.395 | 70.00th=[ 82], 80.00th=[ 90], 90.00th=[ 132], 95.00th=[ 176], 00:25:00.395 | 99.00th=[ 305], 99.50th=[ 342], 99.90th=[ 514], 99.95th=[ 575], 00:25:00.395 | 99.99th=[ 584] 00:25:00.395 bw ( KiB/s): min=128000, max=342016, per=16.04%, avg=204953.60, stdev=57068.84, samples=20 00:25:00.395 iops : min= 500, max= 1336, avg=800.60, stdev=222.93, samples=20 00:25:00.395 lat (msec) : 2=0.01%, 4=0.19%, 10=1.72%, 20=1.80%, 50=24.39% 00:25:00.395 lat (msec) : 100=55.28%, 250=14.99%, 500=1.45%, 750=0.17% 00:25:00.395 cpu : usr=2.46%, sys=2.62%, ctx=3190, majf=0, minf=1 00:25:00.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:00.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:00.395 issued rwts: total=0,8070,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.395 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:00.395 job2: (groupid=0, jobs=1): err= 0: pid=1166611: Sat Jul 13 15:36:30 2024 00:25:00.395 write: IOPS=549, BW=137MiB/s (144MB/s)(1394MiB/10140msec); 0 zone resets 00:25:00.395 slat (usec): min=23, max=164985, avg=1498.16, stdev=4555.43 00:25:00.395 clat (msec): min=2, max=431, avg=114.84, stdev=71.77 00:25:00.395 lat (msec): min=2, max=431, avg=116.34, stdev=72.72 00:25:00.395 clat 
percentiles (msec): 00:25:00.395 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 28], 20.00th=[ 62], 00:25:00.395 | 30.00th=[ 78], 40.00th=[ 82], 50.00th=[ 86], 60.00th=[ 116], 00:25:00.395 | 70.00th=[ 159], 80.00th=[ 178], 90.00th=[ 211], 95.00th=[ 262], 00:25:00.395 | 99.00th=[ 300], 99.50th=[ 334], 99.90th=[ 372], 99.95th=[ 430], 00:25:00.395 | 99.99th=[ 430] 00:25:00.395 bw ( KiB/s): min=61440, max=269824, per=11.04%, avg=141081.60, stdev=55390.10, samples=20 00:25:00.395 iops : min= 240, max= 1054, avg=551.10, stdev=216.37, samples=20 00:25:00.395 lat (msec) : 4=0.25%, 10=2.31%, 20=3.71%, 50=10.19%, 100=39.29% 00:25:00.395 lat (msec) : 250=38.05%, 500=6.19% 00:25:00.395 cpu : usr=1.67%, sys=1.98%, ctx=2629, majf=0, minf=1 00:25:00.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:00.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:00.395 issued rwts: total=0,5574,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.395 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:00.395 job3: (groupid=0, jobs=1): err= 0: pid=1166612: Sat Jul 13 15:36:30 2024 00:25:00.395 write: IOPS=267, BW=66.8MiB/s (70.1MB/s)(678MiB/10143msec); 0 zone resets 00:25:00.395 slat (usec): min=25, max=555187, avg=2644.29, stdev=14997.85 00:25:00.395 clat (msec): min=3, max=2413, avg=236.59, stdev=296.09 00:25:00.395 lat (msec): min=3, max=2413, avg=239.24, stdev=296.91 00:25:00.395 clat percentiles (msec): 00:25:00.395 | 1.00th=[ 11], 5.00th=[ 26], 10.00th=[ 68], 20.00th=[ 131], 00:25:00.395 | 30.00th=[ 153], 40.00th=[ 180], 50.00th=[ 199], 60.00th=[ 222], 00:25:00.395 | 70.00th=[ 247], 80.00th=[ 268], 90.00th=[ 300], 95.00th=[ 363], 00:25:00.395 | 99.00th=[ 2366], 99.50th=[ 2400], 99.90th=[ 2400], 99.95th=[ 2400], 00:25:00.395 | 99.99th=[ 2400] 00:25:00.395 bw ( KiB/s): min= 1536, max=163328, per=5.30%, avg=67763.20, stdev=38951.43, samples=20 00:25:00.395 iops : min= 6, max= 638, avg=264.70, stdev=152.15, samples=20 00:25:00.395 lat (msec) : 4=0.04%, 10=0.77%, 20=2.58%, 50=5.24%, 100=4.06% 00:25:00.395 lat (msec) : 250=58.95%, 500=25.49%, 750=0.30%, 1000=0.07%, 2000=1.07% 00:25:00.395 lat (msec) : >=2000=1.44% 00:25:00.395 cpu : usr=0.88%, sys=0.90%, ctx=1361, majf=0, minf=1 00:25:00.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:25:00.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:00.395 issued rwts: total=0,2711,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.395 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:00.395 job4: (groupid=0, jobs=1): err= 0: pid=1166613: Sat Jul 13 15:36:30 2024 00:25:00.395 write: IOPS=453, BW=113MiB/s (119MB/s)(1160MiB/10238msec); 0 zone resets 00:25:00.395 slat (usec): min=16, max=1287.9k, avg=1777.46, stdev=19292.59 00:25:00.395 clat (msec): min=3, max=1528, avg=139.36, stdev=172.51 00:25:00.395 lat (msec): min=4, max=1532, avg=141.13, stdev=174.08 00:25:00.395 clat percentiles (msec): 00:25:00.395 | 1.00th=[ 14], 5.00th=[ 32], 10.00th=[ 45], 20.00th=[ 71], 00:25:00.395 | 30.00th=[ 75], 40.00th=[ 79], 50.00th=[ 84], 60.00th=[ 112], 00:25:00.395 | 70.00th=[ 140], 80.00th=[ 205], 90.00th=[ 247], 95.00th=[ 288], 00:25:00.395 | 99.00th=[ 1401], 99.50th=[ 1469], 99.90th=[ 1519], 99.95th=[ 1519], 00:25:00.395 | 99.99th=[ 1536] 00:25:00.395 bw ( KiB/s): min=51200, max=252416, 
per=10.19%, avg=130161.78, stdev=63779.65, samples=18 00:25:00.395 iops : min= 200, max= 986, avg=508.44, stdev=249.14, samples=18 00:25:00.395 lat (msec) : 4=0.02%, 10=0.60%, 20=2.24%, 50=8.97%, 100=44.81% 00:25:00.395 lat (msec) : 250=34.09%, 500=7.78%, 750=0.13%, 2000=1.36% 00:25:00.395 cpu : usr=1.16%, sys=1.45%, ctx=2218, majf=0, minf=1 00:25:00.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:00.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:00.395 issued rwts: total=0,4640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.395 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:00.395 job5: (groupid=0, jobs=1): err= 0: pid=1166614: Sat Jul 13 15:36:30 2024 00:25:00.395 write: IOPS=376, BW=94.1MiB/s (98.7MB/s)(962MiB/10217msec); 0 zone resets 00:25:00.395 slat (usec): min=26, max=73004, avg=2339.10, stdev=5093.10 00:25:00.395 clat (msec): min=8, max=494, avg=167.24, stdev=67.42 00:25:00.395 lat (msec): min=8, max=494, avg=169.58, stdev=68.20 00:25:00.395 clat percentiles (msec): 00:25:00.395 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 82], 20.00th=[ 118], 00:25:00.395 | 30.00th=[ 140], 40.00th=[ 159], 50.00th=[ 169], 60.00th=[ 174], 00:25:00.395 | 70.00th=[ 186], 80.00th=[ 209], 90.00th=[ 255], 95.00th=[ 288], 00:25:00.395 | 99.00th=[ 351], 99.50th=[ 405], 99.90th=[ 481], 99.95th=[ 493], 00:25:00.395 | 99.99th=[ 493] 00:25:00.396 bw ( KiB/s): min=40960, max=167424, per=7.58%, avg=96870.40, stdev=31920.23, samples=20 00:25:00.396 iops : min= 160, max= 654, avg=378.40, stdev=124.69, samples=20 00:25:00.396 lat (msec) : 10=0.03%, 20=0.10%, 50=5.17%, 100=8.79%, 250=75.36% 00:25:00.396 lat (msec) : 500=10.55% 00:25:00.396 cpu : usr=1.30%, sys=1.18%, ctx=1434, majf=0, minf=1 00:25:00.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:00.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:00.396 issued rwts: total=0,3847,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:00.396 job6: (groupid=0, jobs=1): err= 0: pid=1166615: Sat Jul 13 15:36:30 2024 00:25:00.396 write: IOPS=449, BW=112MiB/s (118MB/s)(1149MiB/10229msec); 0 zone resets 00:25:00.396 slat (usec): min=22, max=163791, avg=1911.44, stdev=5240.28 00:25:00.396 clat (msec): min=2, max=515, avg=140.49, stdev=84.31 00:25:00.396 lat (msec): min=2, max=515, avg=142.41, stdev=85.45 00:25:00.396 clat percentiles (msec): 00:25:00.396 | 1.00th=[ 9], 5.00th=[ 24], 10.00th=[ 35], 20.00th=[ 53], 00:25:00.396 | 30.00th=[ 84], 40.00th=[ 111], 50.00th=[ 157], 60.00th=[ 171], 00:25:00.396 | 70.00th=[ 180], 80.00th=[ 203], 90.00th=[ 232], 95.00th=[ 275], 00:25:00.396 | 99.00th=[ 401], 99.50th=[ 422], 99.90th=[ 502], 99.95th=[ 502], 00:25:00.396 | 99.99th=[ 514] 00:25:00.396 bw ( KiB/s): min=40960, max=283136, per=9.08%, avg=116010.45, stdev=62490.30, samples=20 00:25:00.396 iops : min= 160, max= 1106, avg=453.15, stdev=244.09, samples=20 00:25:00.396 lat (msec) : 4=0.13%, 10=1.15%, 20=2.24%, 50=15.80%, 100=14.26% 00:25:00.396 lat (msec) : 250=60.14%, 500=6.14%, 750=0.13% 00:25:00.396 cpu : usr=1.56%, sys=1.58%, ctx=2099, majf=0, minf=1 00:25:00.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:00.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:25:00.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:00.396 issued rwts: total=0,4594,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:00.396 job7: (groupid=0, jobs=1): err= 0: pid=1166616: Sat Jul 13 15:36:30 2024 00:25:00.396 write: IOPS=292, BW=73.1MiB/s (76.6MB/s)(748MiB/10232msec); 0 zone resets 00:25:00.396 slat (usec): min=26, max=561667, avg=2939.08, stdev=18712.53 00:25:00.396 clat (msec): min=7, max=2857, avg=215.93, stdev=359.31 00:25:00.396 lat (msec): min=8, max=2868, avg=218.87, stdev=362.81 00:25:00.396 clat percentiles (msec): 00:25:00.396 | 1.00th=[ 18], 5.00th=[ 46], 10.00th=[ 80], 20.00th=[ 97], 00:25:00.396 | 30.00th=[ 115], 40.00th=[ 124], 50.00th=[ 136], 60.00th=[ 157], 00:25:00.396 | 70.00th=[ 213], 80.00th=[ 251], 90.00th=[ 279], 95.00th=[ 351], 00:25:00.396 | 99.00th=[ 2702], 99.50th=[ 2836], 99.90th=[ 2869], 99.95th=[ 2869], 00:25:00.396 | 99.99th=[ 2869] 00:25:00.396 bw ( KiB/s): min= 2048, max=178176, per=5.87%, avg=74936.30, stdev=53499.02, samples=20 00:25:00.396 iops : min= 8, max= 696, avg=292.70, stdev=208.99, samples=20 00:25:00.396 lat (msec) : 10=0.13%, 20=1.04%, 50=4.48%, 100=15.48%, 250=58.46% 00:25:00.396 lat (msec) : 500=17.29%, 750=0.47%, 1000=0.13%, 2000=0.67%, >=2000=1.84% 00:25:00.396 cpu : usr=0.75%, sys=1.05%, ctx=1388, majf=0, minf=1 00:25:00.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:25:00.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:00.396 issued rwts: total=0,2990,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:00.396 job8: (groupid=0, jobs=1): err= 0: pid=1166617: Sat Jul 13 15:36:30 2024 00:25:00.396 write: IOPS=400, BW=100MiB/s (105MB/s)(1016MiB/10140msec); 0 zone resets 00:25:00.396 slat (usec): min=17, max=74143, avg=1642.61, stdev=4707.10 00:25:00.396 clat (usec): min=1614, max=492578, avg=157930.26, stdev=83604.22 00:25:00.396 lat (usec): min=1649, max=495214, avg=159572.87, stdev=84613.42 00:25:00.396 clat percentiles (msec): 00:25:00.396 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 34], 20.00th=[ 84], 00:25:00.396 | 30.00th=[ 118], 40.00th=[ 153], 50.00th=[ 169], 60.00th=[ 178], 00:25:00.396 | 70.00th=[ 199], 80.00th=[ 224], 90.00th=[ 253], 95.00th=[ 271], 00:25:00.396 | 99.00th=[ 447], 99.50th=[ 477], 99.90th=[ 489], 99.95th=[ 489], 00:25:00.396 | 99.99th=[ 493] 00:25:00.396 bw ( KiB/s): min=57344, max=198656, per=8.02%, avg=102451.20, stdev=37007.78, samples=20 00:25:00.396 iops : min= 224, max= 776, avg=400.20, stdev=144.56, samples=20 00:25:00.396 lat (msec) : 2=0.05%, 4=0.52%, 10=2.53%, 20=3.35%, 50=6.74% 00:25:00.396 lat (msec) : 100=12.00%, 250=64.03%, 500=10.77% 00:25:00.396 cpu : usr=1.21%, sys=1.24%, ctx=2441, majf=0, minf=1 00:25:00.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:00.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:00.396 issued rwts: total=0,4065,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:00.396 job9: (groupid=0, jobs=1): err= 0: pid=1166618: Sat Jul 13 15:36:30 2024 00:25:00.396 write: IOPS=416, BW=104MiB/s (109MB/s)(1051MiB/10087msec); 0 zone 
resets 00:25:00.396 slat (usec): min=22, max=543270, avg=1534.94, stdev=16133.55 00:25:00.396 clat (usec): min=1723, max=2859.0k, avg=151930.01, stdev=315299.27 00:25:00.396 lat (usec): min=1760, max=2859.1k, avg=153464.94, stdev=318535.68 00:25:00.396 clat percentiles (msec): 00:25:00.396 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 21], 20.00th=[ 38], 00:25:00.396 | 30.00th=[ 55], 40.00th=[ 70], 50.00th=[ 90], 60.00th=[ 116], 00:25:00.396 | 70.00th=[ 153], 80.00th=[ 194], 90.00th=[ 247], 95.00th=[ 292], 00:25:00.396 | 99.00th=[ 2534], 99.50th=[ 2769], 99.90th=[ 2869], 99.95th=[ 2869], 00:25:00.396 | 99.99th=[ 2869] 00:25:00.396 bw ( KiB/s): min= 2048, max=285184, per=8.30%, avg=106016.05, stdev=76525.38, samples=20 00:25:00.396 iops : min= 8, max= 1114, avg=414.10, stdev=298.94, samples=20 00:25:00.396 lat (msec) : 2=0.10%, 4=0.45%, 10=3.40%, 20=5.99%, 50=16.44% 00:25:00.396 lat (msec) : 100=27.09%, 250=36.99%, 500=7.47%, 750=0.19%, 1000=0.10% 00:25:00.396 lat (msec) : 2000=0.48%, >=2000=1.31% 00:25:00.396 cpu : usr=1.38%, sys=1.42%, ctx=2979, majf=0, minf=1 00:25:00.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:00.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:00.396 issued rwts: total=0,4204,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:00.396 job10: (groupid=0, jobs=1): err= 0: pid=1166619: Sat Jul 13 15:36:30 2024 00:25:00.396 write: IOPS=315, BW=78.9MiB/s (82.7MB/s)(808MiB/10241msec); 0 zone resets 00:25:00.396 slat (usec): min=22, max=167840, avg=2920.90, stdev=6761.38 00:25:00.396 clat (msec): min=3, max=452, avg=199.76, stdev=74.32 00:25:00.396 lat (msec): min=3, max=452, avg=202.68, stdev=74.85 00:25:00.396 clat percentiles (msec): 00:25:00.396 | 1.00th=[ 13], 5.00th=[ 29], 10.00th=[ 133], 20.00th=[ 167], 00:25:00.396 | 30.00th=[ 174], 40.00th=[ 188], 50.00th=[ 203], 60.00th=[ 220], 00:25:00.396 | 70.00th=[ 230], 80.00th=[ 245], 90.00th=[ 266], 95.00th=[ 300], 00:25:00.396 | 99.00th=[ 435], 99.50th=[ 443], 99.90th=[ 451], 99.95th=[ 451], 00:25:00.396 | 99.99th=[ 451] 00:25:00.396 bw ( KiB/s): min=59392, max=164864, per=6.35%, avg=81100.80, stdev=22644.31, samples=20 00:25:00.396 iops : min= 232, max= 644, avg=316.80, stdev=88.45, samples=20 00:25:00.396 lat (msec) : 4=0.06%, 10=0.40%, 20=2.26%, 50=5.11%, 100=0.68% 00:25:00.396 lat (msec) : 250=74.88%, 500=16.62% 00:25:00.396 cpu : usr=0.95%, sys=1.00%, ctx=1175, majf=0, minf=1 00:25:00.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:25:00.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:00.396 issued rwts: total=0,3232,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:00.396 00:25:00.396 Run status group 0 (all jobs): 00:25:00.396 WRITE: bw=1248MiB/s (1308MB/s), 66.8MiB/s-197MiB/s (70.1MB/s-207MB/s), io=12.5GiB (13.4GB), run=10087-10248msec 00:25:00.396 00:25:00.396 Disk stats (read/write): 00:25:00.396 nvme0n1: ios=51/14364, merge=0/0, ticks=826/1237148, in_queue=1237974, util=99.90% 00:25:00.396 nvme10n1: ios=47/16087, merge=0/0, ticks=2353/1224184, in_queue=1226537, util=100.00% 00:25:00.396 nvme1n1: ios=48/10938, merge=0/0, ticks=1712/1203133, in_queue=1204845, util=100.00% 00:25:00.396 nvme2n1: 
ios=45/5227, merge=0/0, ticks=3367/1188287, in_queue=1191654, util=100.00% 00:25:00.396 nvme3n1: ios=0/9238, merge=0/0, ticks=0/1240759, in_queue=1240759, util=97.86% 00:25:00.396 nvme4n1: ios=45/7664, merge=0/0, ticks=761/1231170, in_queue=1231931, util=100.00% 00:25:00.396 nvme5n1: ios=0/9150, merge=0/0, ticks=0/1234032, in_queue=1234032, util=98.29% 00:25:00.396 nvme6n1: ios=0/5939, merge=0/0, ticks=0/1234097, in_queue=1234097, util=98.39% 00:25:00.396 nvme7n1: ios=40/7942, merge=0/0, ticks=856/1221534, in_queue=1222390, util=100.00% 00:25:00.396 nvme8n1: ios=42/8195, merge=0/0, ticks=1439/1186301, in_queue=1187740, util=100.00% 00:25:00.396 nvme9n1: ios=0/6417, merge=0/0, ticks=0/1225511, in_queue=1225511, util=99.13% 00:25:00.396 15:36:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:00.396 15:36:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:00.396 15:36:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:00.396 15:36:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:00.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:00.396 15:36:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:00.396 15:36:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:00.396 15:36:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:00.396 15:36:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:25:00.396 15:36:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:00.396 15:36:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:25:00.396 15:36:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:00.396 15:36:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:00.396 15:36:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.396 15:36:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.396 15:36:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.396 15:36:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:00.396 15:36:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:00.396 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:00.396 15:36:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:00.397 15:36:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:00.397 15:36:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:00.397 15:36:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:25:00.397 15:36:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:00.397 15:36:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:25:00.397 15:36:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # 
return 0 00:25:00.397 15:36:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:00.397 15:36:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.397 15:36:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.397 15:36:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.397 15:36:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:00.397 15:36:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:00.655 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:00.655 15:36:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:00.655 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:00.655 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:00.655 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:25:00.655 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:00.655 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:25:00.655 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:00.655 15:36:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:00.655 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.655 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.655 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.655 15:36:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:00.655 15:36:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:00.912 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:00.912 15:36:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:00.912 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:00.913 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:00.913 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:25:00.913 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:00.913 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:25:00.913 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:00.913 15:36:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:00.913 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.913 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:00.913 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:25:00.913 15:36:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:00.913 15:36:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:01.170 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:01.170 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:01.170 15:36:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:01.428 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:01.428 15:36:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:01.428 15:36:32 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:01.428 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:01.428 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:25:01.428 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:01.428 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:25:01.428 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:01.428 15:36:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:01.429 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.429 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.429 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.429 15:36:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:01.429 15:36:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:01.429 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:01.429 15:36:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:01.429 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:01.429 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:01.429 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:25:01.429 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:01.429 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:25:01.429 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:01.429 15:36:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:01.429 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.429 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.429 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.429 15:36:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:01.429 15:36:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:01.688 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:01.688 15:36:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:01.688 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:01.688 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:01.688 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:25:01.688 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:01.688 15:36:32 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:25:01.688 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:01.688 15:36:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:01.688 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.688 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.688 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.688 15:36:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:01.688 15:36:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:01.688 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:01.948 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:01.948 rmmod nvme_tcp 00:25:01.948 rmmod nvme_fabrics 00:25:01.948 rmmod nvme_keyring 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:01.948 15:36:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:01.949 15:36:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1160558 ']' 00:25:01.949 15:36:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1160558 00:25:01.949 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 1160558 ']' 00:25:01.949 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 1160558 00:25:01.949 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:25:01.949 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:01.949 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1160558 00:25:01.949 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:01.949 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:01.949 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1160558' 00:25:01.949 killing process with pid 1160558 00:25:01.949 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 1160558 00:25:01.949 15:36:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 1160558 00:25:02.520 15:36:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:02.520 15:36:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:02.520 15:36:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:02.520 15:36:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:02.520 15:36:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:02.520 15:36:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.520 15:36:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:02.520 15:36:33 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.056 15:36:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:05.056 00:25:05.056 real 1m0.344s 00:25:05.056 user 3m19.736s 00:25:05.056 sys 0m23.437s 00:25:05.056 15:36:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:05.056 15:36:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:05.056 ************************************ 00:25:05.056 END TEST nvmf_multiconnection 00:25:05.056 ************************************ 00:25:05.056 15:36:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:05.056 15:36:35 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:05.056 15:36:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:05.056 15:36:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:05.056 15:36:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:05.056 ************************************ 00:25:05.056 START TEST nvmf_initiator_timeout 00:25:05.056 ************************************ 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:05.056 * Looking for test storage... 00:25:05.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 
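The teardown at the end of the multiconnection run above repeats the same three steps for each of the eleven subsystems: disconnect the kernel initiator from the cnode, poll lsblk until the SPDK serial for that subsystem disappears, then delete the subsystem over the target's RPC socket. The bash sketch below is a condensed illustration of that pattern rather than the literal multiconnection.sh code; it assumes rpc.py from the SPDK tree is on PATH, and the unbounded polling loop stands in for the script's bounded waitforserial_disconnect helper.

  # Condensed sketch of the per-subsystem teardown seen in the trace above.
  for i in $(seq 1 11); do
      nqn="nqn.2016-06.io.spdk:cnode${i}"
      serial="SPDK${i}"
      nvme disconnect -n "$nqn"                          # drop the NVMe/TCP session
      while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
          sleep 1                                        # wait for the namespace to disappear
      done
      rpc.py nvmf_delete_subsystem "$nqn"                # remove the subsystem from the target
  done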
00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 
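One detail from the common.sh setup above that matters later: the host NQN used by every nvme connect in this test is generated once with nvme gen-hostnqn, and the UUID embedded in that NQN is reused as the host ID. The sketch below shows that convention with the variable names from the trace; deriving NVME_HOSTID from the NQN suffix is an assumption about how the helper behaves, inferred from the matching values in the log.

  NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}           # reuse the trailing UUID as the host ID (assumed)
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # later: nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420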
00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:05.056 15:36:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@298 -- # local -ga mlx 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:06.516 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:06.516 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.516 15:36:37 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:06.516 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:06.516 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:06.516 15:36:37 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:06.516 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:06.517 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:06.517 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:06.517 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:06.517 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:06.517 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:06.517 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:06.517 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:06.517 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:06.517 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:06.517 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:06.517 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:06.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:06.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:25:06.776 00:25:06.776 --- 10.0.0.2 ping statistics --- 00:25:06.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.776 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:25:06.776 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:06.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:06.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:25:06.776 00:25:06.776 --- 10.0.0.1 ping statistics --- 00:25:06.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.776 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:25:06.776 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:06.776 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:06.776 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:06.776 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:06.776 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:06.776 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:06.776 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:06.776 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:06.776 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:06.776 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:06.776 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:06.776 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:06.776 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:06.776 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1169799 00:25:06.776 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:06.776 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1169799 00:25:06.776 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 1169799 ']' 00:25:06.776 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.776 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:06.776 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.776 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:06.776 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:06.776 [2024-07-13 15:36:37.361256] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:25:06.776 [2024-07-13 15:36:37.361331] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.776 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.776 [2024-07-13 15:36:37.398347] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
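The topology for this test lives entirely on one host, and the commands are all visible in the trace above: one port of the E810 NIC (cvl_0_0) is moved into a dedicated network namespace where the SPDK target will run as 10.0.0.2, the sibling port (cvl_0_1) stays in the root namespace as the initiator side on 10.0.0.1, port 4420 is opened in iptables, and a ping in each direction verifies the path before the target starts. Condensed into plain iproute2/iptables commands (names and addresses copied from the trace; this sketches the helpers' net effect, not their exact code):

  ip netns add cvl_0_0_ns_spdk                         # namespace that will host nvmf_tgt
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator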
00:25:06.776 [2024-07-13 15:36:37.425421] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:06.776 [2024-07-13 15:36:37.512218] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.776 [2024-07-13 15:36:37.512270] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.776 [2024-07-13 15:36:37.512298] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.776 [2024-07-13 15:36:37.512309] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.776 [2024-07-13 15:36:37.512319] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:06.776 [2024-07-13 15:36:37.512472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.776 [2024-07-13 15:36:37.512539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:06.776 [2024-07-13 15:36:37.512590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:06.776 [2024-07-13 15:36:37.512592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:07.035 Malloc0 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:07.035 Delay0 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:07.035 [2024-07-13 15:36:37.707225] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:07.035 [2024-07-13 15:36:37.735470] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.035 15:36:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:07.972 15:36:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:07.972 15:36:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:25:07.972 15:36:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:07.972 15:36:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:07.972 15:36:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:25:09.870 15:36:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:09.870 15:36:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:09.870 15:36:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:09.870 15:36:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:09.870 15:36:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:09.870 15:36:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:25:09.870 15:36:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1170227 00:25:09.870 15:36:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:09.870 15:36:40 nvmf_tcp.nvmf_initiator_timeout 
-- target/initiator_timeout.sh@37 -- # sleep 3 00:25:09.870 [global] 00:25:09.870 thread=1 00:25:09.870 invalidate=1 00:25:09.870 rw=write 00:25:09.870 time_based=1 00:25:09.870 runtime=60 00:25:09.870 ioengine=libaio 00:25:09.870 direct=1 00:25:09.870 bs=4096 00:25:09.870 iodepth=1 00:25:09.870 norandommap=0 00:25:09.870 numjobs=1 00:25:09.870 00:25:09.870 verify_dump=1 00:25:09.870 verify_backlog=512 00:25:09.870 verify_state_save=0 00:25:09.870 do_verify=1 00:25:09.870 verify=crc32c-intel 00:25:09.870 [job0] 00:25:09.870 filename=/dev/nvme0n1 00:25:09.870 Could not set queue depth (nvme0n1) 00:25:10.128 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:10.128 fio-3.35 00:25:10.128 Starting 1 thread 00:25:13.415 15:36:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:13.415 15:36:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.415 15:36:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:13.415 true 00:25:13.415 15:36:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.415 15:36:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:13.415 15:36:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.415 15:36:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:13.415 true 00:25:13.415 15:36:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.415 15:36:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:13.415 15:36:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.415 15:36:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:13.415 true 00:25:13.415 15:36:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.415 15:36:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:13.415 15:36:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.415 15:36:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:13.415 true 00:25:13.415 15:36:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.415 15:36:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:15.948 15:36:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:15.948 15:36:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.948 15:36:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:15.948 true 00:25:15.948 15:36:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.948 15:36:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:15.948 15:36:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.948 15:36:46 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:15.948 true 00:25:15.948 15:36:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.948 15:36:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:15.948 15:36:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.948 15:36:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:15.948 true 00:25:15.948 15:36:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.948 15:36:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:15.948 15:36:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.948 15:36:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:15.948 true 00:25:15.948 15:36:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.948 15:36:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:15.949 15:36:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1170227 00:26:12.196 00:26:12.196 job0: (groupid=0, jobs=1): err= 0: pid=1170300: Sat Jul 13 15:37:40 2024 00:26:12.196 read: IOPS=132, BW=528KiB/s (541kB/s)(31.0MiB/60017msec) 00:26:12.196 slat (usec): min=5, max=9318, avg=16.30, stdev=144.49 00:26:12.196 clat (usec): min=317, max=41116k, avg=7227.05, stdev=461818.85 00:26:12.196 lat (usec): min=327, max=41116k, avg=7243.35, stdev=461818.88 00:26:12.196 clat percentiles (usec): 00:26:12.196 | 1.00th=[ 347], 5.00th=[ 388], 10.00th=[ 404], 00:26:12.196 | 20.00th=[ 420], 30.00th=[ 429], 40.00th=[ 433], 00:26:12.196 | 50.00th=[ 441], 60.00th=[ 453], 70.00th=[ 461], 00:26:12.196 | 80.00th=[ 474], 90.00th=[ 510], 95.00th=[ 553], 00:26:12.196 | 99.00th=[ 41681], 99.50th=[ 42206], 99.90th=[ 42206], 00:26:12.196 | 99.95th=[ 42206], 99.99th=[17112761] 00:26:12.196 write: IOPS=136, BW=546KiB/s (559kB/s)(32.0MiB/60017msec); 0 zone resets 00:26:12.196 slat (nsec): min=7014, max=85997, avg=19260.04, stdev=10797.99 00:26:12.196 clat (usec): min=210, max=4088, avg=287.97, stdev=74.04 00:26:12.196 lat (usec): min=218, max=4124, avg=307.23, stdev=79.19 00:26:12.196 clat percentiles (usec): 00:26:12.196 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 241], 00:26:12.196 | 30.00th=[ 249], 40.00th=[ 262], 50.00th=[ 281], 60.00th=[ 293], 00:26:12.196 | 70.00th=[ 310], 80.00th=[ 326], 90.00th=[ 363], 95.00th=[ 383], 00:26:12.196 | 99.00th=[ 424], 99.50th=[ 433], 99.90th=[ 453], 99.95th=[ 693], 00:26:12.196 | 99.99th=[ 4080] 00:26:12.196 bw ( KiB/s): min= 4096, max= 6624, per=100.00%, avg=5041.23, stdev=981.23, samples=13 00:26:12.196 iops : min= 1024, max= 1656, avg=1260.31, stdev=245.31, samples=13 00:26:12.196 lat (usec) : 250=16.35%, 500=77.68%, 750=4.01%, 1000=0.01% 00:26:12.196 lat (msec) : 4=0.01%, 10=0.01%, 50=1.92%, >=2000=0.01% 00:26:12.197 cpu : usr=0.31%, sys=0.61%, ctx=16122, majf=0, minf=2 00:26:12.197 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:12.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.197 issued rwts: total=7928,8192,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:26:12.197 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:12.197 00:26:12.197 Run status group 0 (all jobs): 00:26:12.197 READ: bw=528KiB/s (541kB/s), 528KiB/s-528KiB/s (541kB/s-541kB/s), io=31.0MiB (32.5MB), run=60017-60017msec 00:26:12.197 WRITE: bw=546KiB/s (559kB/s), 546KiB/s-546KiB/s (559kB/s-559kB/s), io=32.0MiB (33.6MB), run=60017-60017msec 00:26:12.197 00:26:12.197 Disk stats (read/write): 00:26:12.197 nvme0n1: ios=8024/8192, merge=0/0, ticks=17052/2213, in_queue=19265, util=99.59% 00:26:12.197 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:12.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:12.197 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:12.197 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:26:12.197 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:12.197 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:12.197 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:12.197 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:12.197 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:26:12.197 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:12.197 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:12.197 nvmf hotplug test: fio successful as expected 00:26:12.197 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:12.197 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.197 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:12.198 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.198 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:12.198 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:12.198 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:12.198 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:12.198 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:12.198 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:12.198 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:12.198 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:12.198 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:12.198 rmmod nvme_tcp 00:26:12.198 rmmod nvme_fabrics 00:26:12.198 rmmod nvme_keyring 00:26:12.198 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:12.198 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 
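What the fio block and the bdev_delay_update_latency calls above amount to: the exported namespace sits on a delay bdev (Delay0), a 60-second fio write job (bs=4096, iodepth=1, libaio, crc32c-intel verify) runs against /dev/nvme0n1, and mid-run the target's latencies are pushed to roughly 31 seconds (p99 write to 310 seconds), just past the initiator's default 30-second I/O timeout, before being dropped back to 30 microseconds. The pass criterion is simply that fio rides out the stall and exits 0, which the "fio successful as expected" message confirms. A minimal sketch of the same latency toggling using plain scripts/rpc.py (the trace goes through the suite's rpc_cmd wrapper; latency values are in microseconds):

# Stall the backing device while the fio job is in flight.
./scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  31000000
./scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
./scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  31000000
./scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
sleep 3
# Recover: back to a 30 us delay so the queued I/O can drain before fio's 60 s run ends.
./scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  30
./scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 30
./scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  30
./scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 30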
00:26:12.198 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:12.198 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1169799 ']' 00:26:12.198 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1169799 00:26:12.198 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 1169799 ']' 00:26:12.198 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 1169799 00:26:12.198 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:26:12.198 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:12.198 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1169799 00:26:12.198 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:12.198 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:12.199 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1169799' 00:26:12.199 killing process with pid 1169799 00:26:12.199 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 1169799 00:26:12.199 15:37:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 1169799 00:26:12.199 15:37:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:12.199 15:37:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:12.199 15:37:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:12.199 15:37:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:12.199 15:37:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:12.199 15:37:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.199 15:37:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:12.199 15:37:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.775 15:37:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:12.775 00:26:12.775 real 1m8.000s 00:26:12.775 user 4m10.929s 00:26:12.775 sys 0m6.690s 00:26:12.775 15:37:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:12.775 15:37:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:12.775 ************************************ 00:26:12.775 END TEST nvmf_initiator_timeout 00:26:12.775 ************************************ 00:26:12.775 15:37:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:12.775 15:37:43 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:26:12.775 15:37:43 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:12.775 15:37:43 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:26:12.775 15:37:43 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:12.775 15:37:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:26:14.685 15:37:45 
nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:14.685 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:14.685 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
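The wall of nvmf/common.sh lines above is gather_supported_nvmf_pci_devs: it builds lists of supported NIC functions from a cached PCI scan (Intel E810 devices 0x1592/0x159b, X722 0x37d2, plus a set of Mellanox IDs), keeps the E810 entries on this rig, and resolves each function to its kernel netdev through sysfs, which is how both ports at 0000:0a:00.0/.1 end up as cvl_0_0 and cvl_0_1. A standalone sketch of the same resolution; the lspci filtering is an assumption, the suite populates its pci_bus_cache differently:

# Find Intel E810 functions (vendor 0x8086, device 0x159b or 0x1592) and print
# the net device that sysfs exposes under each one.
for dev in 159b 1592; do
    for pci in $(lspci -Dn -d 8086:${dev} | awk '{print $1}'); do
        for netdir in /sys/bus/pci/devices/${pci}/net/*; do
            [ -e "$netdir" ] || continue
            echo "Found net devices under ${pci}: $(basename "$netdir")"
        done
    done
done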
00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.685 15:37:45 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:14.685 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:14.686 15:37:45 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.686 15:37:45 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:14.686 15:37:45 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.686 15:37:45 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:14.686 15:37:45 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.686 15:37:45 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:14.686 15:37:45 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:14.686 15:37:45 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.686 15:37:45 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:14.686 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:14.686 15:37:45 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.686 15:37:45 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:14.686 15:37:45 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:14.686 15:37:45 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:26:14.686 15:37:45 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:14.686 15:37:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:14.686 15:37:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:14.686 15:37:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:14.686 ************************************ 00:26:14.686 START TEST nvmf_perf_adq 00:26:14.686 ************************************ 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:14.686 * Looking for test storage... 
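perf_adq.sh starts by sourcing test/nvmf/common.sh, and the next stretch of the trace shows the host identity that file prepares for later nvme connect calls: a fresh host NQN from nvme gen-hostnqn and a host ID taken from its UUID suffix. Roughly (the parameter expansion below is illustrative, not a copy of the helper):

# Host identity used for `nvme connect`; the UUID is regenerated every run.
NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}         # bare <uuid>
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")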
00:26:14.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:14.686 15:37:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:16.674 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:16.674 Found 0000:0a:00.1 (0x8086 - 0x159b) 
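perf_adq repeats the NIC scan because the test exercises ADQ (Application Device Queues) on the E810 ports it just confirmed; ADQ, roughly, pins each TCP connection's traffic to its own hardware queue so a single poll group / core can service it end to end. Before configuring anything the suite resets driver state, which is the rmmod ice / modprobe ice / sleep 5 that follows in the trace. As a sketch (the || true guard is an addition for the case where the module is not loaded):

# adq_reload_driver, in essence: reload ice so ADQ/channel state starts clean,
# then give the E810 ports a few seconds to come back up.
rmmod ice || true
modprobe ice
sleep 5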
00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:16.674 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:16.674 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:16.674 15:37:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:17.242 15:37:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:19.146 15:37:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:24.426 15:37:54 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:24.426 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:24.426 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:24.426 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:24.427 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:24.427 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:24.427 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:24.427 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:24.427 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:24.428 15:37:54 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:24.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:24.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:26:24.428 00:26:24.428 --- 10.0.0.2 ping statistics --- 00:26:24.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.428 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:26:24.428 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:24.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:24.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:26:24.428 00:26:24.428 --- 10.0.0.1 ping statistics --- 00:26:24.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.428 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:26:24.428 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:24.428 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:24.428 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:24.428 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:24.428 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:24.428 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:24.428 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:24.428 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:24.428 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:24.428 15:37:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:24.428 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:24.428 15:37:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:24.428 15:37:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:24.428 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1181806 00:26:24.428 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1181806 00:26:24.428 15:37:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:24.428 15:37:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1181806 ']' 00:26:24.428 15:37:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.428 15:37:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:24.428 15:37:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:24.428 15:37:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:24.428 15:37:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:24.428 [2024-07-13 15:37:55.021816] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
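nvmftestinit above turns the two E810 ports into a back-to-back test link: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), an iptables rule admits TCP port 4420, a ping in each direction proves reachability, and nvmf_tgt is then launched inside the namespace with core mask 0xF, all tracepoint groups (-e 0xFFFF) and --wait-for-rpc so the sock options can be set before the framework starts. Condensed from the trace (paths abbreviated; the target is backgrounded and the suite then waits for its RPC socket):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator port
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &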
00:26:24.428 [2024-07-13 15:37:55.021932] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:24.428 EAL: No free 2048 kB hugepages reported on node 1 00:26:24.428 [2024-07-13 15:37:55.060092] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:24.428 [2024-07-13 15:37:55.092534] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:24.428 [2024-07-13 15:37:55.183826] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:24.428 [2024-07-13 15:37:55.183911] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:24.428 [2024-07-13 15:37:55.183928] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:24.428 [2024-07-13 15:37:55.183942] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:24.428 [2024-07-13 15:37:55.183953] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:24.428 [2024-07-13 15:37:55.184041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.428 [2024-07-13 15:37:55.184097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:24.428 [2024-07-13 15:37:55.184216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:24.428 [2024-07-13 15:37:55.184218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 
-- # rpc_cmd framework_start_init 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:24.687 [2024-07-13 15:37:55.405833] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:24.687 Malloc1 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.687 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:24.946 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.946 15:37:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:24.946 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.946 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:24.946 [2024-07-13 15:37:55.459213] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.946 15:37:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.946 15:37:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1181839 00:26:24.946 15:37:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:24.946 15:37:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:24.946 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.853 15:37:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:26.853 15:37:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.853 15:37:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.853 
15:37:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.853 15:37:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:26.853 "tick_rate": 2700000000, 00:26:26.853 "poll_groups": [ 00:26:26.853 { 00:26:26.853 "name": "nvmf_tgt_poll_group_000", 00:26:26.853 "admin_qpairs": 1, 00:26:26.853 "io_qpairs": 1, 00:26:26.853 "current_admin_qpairs": 1, 00:26:26.853 "current_io_qpairs": 1, 00:26:26.853 "pending_bdev_io": 0, 00:26:26.853 "completed_nvme_io": 20498, 00:26:26.853 "transports": [ 00:26:26.853 { 00:26:26.853 "trtype": "TCP" 00:26:26.853 } 00:26:26.853 ] 00:26:26.853 }, 00:26:26.853 { 00:26:26.853 "name": "nvmf_tgt_poll_group_001", 00:26:26.853 "admin_qpairs": 0, 00:26:26.853 "io_qpairs": 1, 00:26:26.853 "current_admin_qpairs": 0, 00:26:26.853 "current_io_qpairs": 1, 00:26:26.853 "pending_bdev_io": 0, 00:26:26.853 "completed_nvme_io": 20483, 00:26:26.853 "transports": [ 00:26:26.853 { 00:26:26.853 "trtype": "TCP" 00:26:26.853 } 00:26:26.853 ] 00:26:26.853 }, 00:26:26.853 { 00:26:26.853 "name": "nvmf_tgt_poll_group_002", 00:26:26.853 "admin_qpairs": 0, 00:26:26.853 "io_qpairs": 1, 00:26:26.853 "current_admin_qpairs": 0, 00:26:26.853 "current_io_qpairs": 1, 00:26:26.853 "pending_bdev_io": 0, 00:26:26.853 "completed_nvme_io": 20334, 00:26:26.853 "transports": [ 00:26:26.853 { 00:26:26.853 "trtype": "TCP" 00:26:26.853 } 00:26:26.853 ] 00:26:26.853 }, 00:26:26.853 { 00:26:26.853 "name": "nvmf_tgt_poll_group_003", 00:26:26.853 "admin_qpairs": 0, 00:26:26.853 "io_qpairs": 1, 00:26:26.853 "current_admin_qpairs": 0, 00:26:26.853 "current_io_qpairs": 1, 00:26:26.853 "pending_bdev_io": 0, 00:26:26.853 "completed_nvme_io": 19414, 00:26:26.853 "transports": [ 00:26:26.853 { 00:26:26.853 "trtype": "TCP" 00:26:26.853 } 00:26:26.853 ] 00:26:26.853 } 00:26:26.853 ] 00:26:26.853 }' 00:26:26.853 15:37:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:26.853 15:37:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:26.853 15:37:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:26:26.853 15:37:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:26.853 15:37:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1181839 00:26:34.973 Initializing NVMe Controllers 00:26:34.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:34.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:34.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:34.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:34.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:34.974 Initialization complete. Launching workers. 
00:26:34.974 ======================================================== 00:26:34.974 Latency(us) 00:26:34.974 Device Information : IOPS MiB/s Average min max 00:26:34.974 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10185.90 39.79 6283.24 2863.17 9055.29 00:26:34.974 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10744.50 41.97 5956.30 2943.57 8463.29 00:26:34.974 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10682.30 41.73 5992.86 1956.24 9180.22 00:26:34.974 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10747.20 41.98 5954.62 2741.00 9787.02 00:26:34.974 ======================================================== 00:26:34.974 Total : 42359.90 165.47 6043.71 1956.24 9787.02 00:26:34.974 00:26:34.974 15:38:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:26:34.974 15:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:34.974 15:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:34.974 15:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:34.974 15:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:34.974 15:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:34.974 15:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:34.974 rmmod nvme_tcp 00:26:34.974 rmmod nvme_fabrics 00:26:34.974 rmmod nvme_keyring 00:26:34.974 15:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:34.974 15:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:34.974 15:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:34.974 15:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1181806 ']' 00:26:34.974 15:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1181806 00:26:34.974 15:38:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1181806 ']' 00:26:34.974 15:38:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1181806 00:26:34.974 15:38:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:26:34.974 15:38:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:34.974 15:38:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1181806 00:26:34.974 15:38:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:34.974 15:38:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:34.974 15:38:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1181806' 00:26:34.974 killing process with pid 1181806 00:26:34.974 15:38:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1181806 00:26:34.974 15:38:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1181806 00:26:35.232 15:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:35.232 15:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:35.232 15:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:35.232 15:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:35.232 15:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:35.232 15:38:05 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.232 15:38:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:35.232 15:38:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.771 15:38:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:37.771 15:38:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:26:37.771 15:38:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:38.029 15:38:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:39.930 15:38:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:45.258 15:38:15 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:45.258 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:45.258 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
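(For readability, not part of the test output: the gather_supported_nvmf_pci_devs trace around this point whitelists NVMe-oF-capable NICs by PCI vendor:device ID — Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox IDs — and then resolves each matching port to its kernel net device through sysfs; the two E810 ports found in this run surface as cvl_0_0 and cvl_0_1. A minimal standalone sketch of that lookup, assuming the 0000:0a:00.0 address shown here:

    # hypothetical manual check, not emitted by the test scripts
    cat /sys/bus/pci/devices/0000:0a:00.0/vendor   # 0x8086 (Intel)
    cat /sys/bus/pci/devices/0000:0a:00.0/device   # 0x159b (E810 family)
    ls  /sys/bus/pci/devices/0000:0a:00.0/net/     # cvl_0_0 in this run
)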
00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.258 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:45.259 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:45.259 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:45.259 
15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:45.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:45.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:26:45.259 00:26:45.259 --- 10.0.0.2 ping statistics --- 00:26:45.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.259 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:45.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:45.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:26:45.259 00:26:45.259 --- 10.0.0.1 ping statistics --- 00:26:45.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.259 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:45.259 net.core.busy_poll = 1 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:45.259 net.core.busy_read = 1 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1184449 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1184449 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1184449 ']' 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:45.259 15:38:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.259 [2024-07-13 15:38:15.916046] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:26:45.259 [2024-07-13 15:38:15.916129] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:45.259 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.259 [2024-07-13 15:38:15.954831] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:45.259 [2024-07-13 15:38:15.987623] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:45.519 [2024-07-13 15:38:16.079678] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:45.519 [2024-07-13 15:38:16.079754] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:45.519 [2024-07-13 15:38:16.079770] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:45.519 [2024-07-13 15:38:16.079784] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:45.519 [2024-07-13 15:38:16.079795] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
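(For readability, not part of the test output: the perf_adq.sh commands traced above are the ADQ side of the setup — hardware TC offload is enabled on the target-side E810 port, busy polling is turned on, an mqprio root qdisc splits the queues into two traffic classes, a flower filter steers NVMe/TCP traffic to 10.0.0.2:4420 into hardware TC 1, and set_xps_rxqs pins XPS to the matching queues before the target starts. A condensed sketch of those commands, using the device names and addresses from this run; in the trace the ethtool/tc steps run inside the cvl_0_0_ns_spdk namespace while the busy-poll sysctls run outside it:

    # target-side port, inside the namespace in this trace
    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
    ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    # initiator side, no namespace in this trace
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1

Later in the trace the target pairs this with sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport --sock-priority 1, which is what lets connections land on the queues reserved for TC 1.)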
00:26:45.519 [2024-07-13 15:38:16.079909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.519 [2024-07-13 15:38:16.079956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:45.519 [2024-07-13 15:38:16.080033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:45.519 [2024-07-13 15:38:16.080036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.519 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:45.519 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:26:45.519 15:38:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:45.519 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:45.519 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.519 15:38:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:45.519 15:38:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:26:45.519 15:38:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:45.519 15:38:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:45.519 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.519 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.519 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.519 15:38:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:45.519 15:38:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:45.519 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.519 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.519 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.520 15:38:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:45.520 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.520 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.778 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.778 15:38:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:45.778 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.778 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.778 [2024-07-13 15:38:16.310657] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:45.778 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.778 15:38:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:45.778 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.778 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.778 Malloc1 00:26:45.778 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.778 15:38:16 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:45.778 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.778 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.778 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.778 15:38:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:45.778 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.778 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.778 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.778 15:38:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:45.778 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.778 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:45.778 [2024-07-13 15:38:16.361793] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.778 15:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.778 15:38:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1184598 00:26:45.778 15:38:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:45.778 15:38:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:26:45.778 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.680 15:38:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:26:47.680 15:38:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.680 15:38:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.680 15:38:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.680 15:38:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:26:47.680 "tick_rate": 2700000000, 00:26:47.680 "poll_groups": [ 00:26:47.680 { 00:26:47.680 "name": "nvmf_tgt_poll_group_000", 00:26:47.680 "admin_qpairs": 1, 00:26:47.680 "io_qpairs": 2, 00:26:47.680 "current_admin_qpairs": 1, 00:26:47.680 "current_io_qpairs": 2, 00:26:47.680 "pending_bdev_io": 0, 00:26:47.680 "completed_nvme_io": 23173, 00:26:47.680 "transports": [ 00:26:47.680 { 00:26:47.680 "trtype": "TCP" 00:26:47.680 } 00:26:47.680 ] 00:26:47.680 }, 00:26:47.680 { 00:26:47.680 "name": "nvmf_tgt_poll_group_001", 00:26:47.680 "admin_qpairs": 0, 00:26:47.680 "io_qpairs": 2, 00:26:47.680 "current_admin_qpairs": 0, 00:26:47.680 "current_io_qpairs": 2, 00:26:47.680 "pending_bdev_io": 0, 00:26:47.680 "completed_nvme_io": 27577, 00:26:47.680 "transports": [ 00:26:47.680 { 00:26:47.680 "trtype": "TCP" 00:26:47.680 } 00:26:47.680 ] 00:26:47.680 }, 00:26:47.680 { 00:26:47.680 "name": "nvmf_tgt_poll_group_002", 00:26:47.680 "admin_qpairs": 0, 00:26:47.680 "io_qpairs": 0, 00:26:47.680 "current_admin_qpairs": 0, 00:26:47.680 "current_io_qpairs": 0, 00:26:47.680 "pending_bdev_io": 0, 00:26:47.680 "completed_nvme_io": 0, 
00:26:47.680 "transports": [ 00:26:47.680 { 00:26:47.680 "trtype": "TCP" 00:26:47.680 } 00:26:47.680 ] 00:26:47.680 }, 00:26:47.680 { 00:26:47.680 "name": "nvmf_tgt_poll_group_003", 00:26:47.680 "admin_qpairs": 0, 00:26:47.680 "io_qpairs": 0, 00:26:47.680 "current_admin_qpairs": 0, 00:26:47.680 "current_io_qpairs": 0, 00:26:47.680 "pending_bdev_io": 0, 00:26:47.680 "completed_nvme_io": 0, 00:26:47.680 "transports": [ 00:26:47.680 { 00:26:47.680 "trtype": "TCP" 00:26:47.680 } 00:26:47.680 ] 00:26:47.680 } 00:26:47.680 ] 00:26:47.680 }' 00:26:47.680 15:38:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:47.680 15:38:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:26:47.681 15:38:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:26:47.681 15:38:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:26:47.681 15:38:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1184598 00:26:55.804 Initializing NVMe Controllers 00:26:55.804 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:55.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:55.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:55.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:55.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:55.804 Initialization complete. Launching workers. 00:26:55.804 ======================================================== 00:26:55.804 Latency(us) 00:26:55.804 Device Information : IOPS MiB/s Average min max 00:26:55.804 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5770.98 22.54 11092.51 1679.07 57450.42 00:26:55.804 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6465.56 25.26 9900.81 1823.76 54597.31 00:26:55.804 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6515.26 25.45 9828.26 2334.83 54346.98 00:26:55.804 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7528.94 29.41 8499.82 1731.76 54757.13 00:26:55.804 ======================================================== 00:26:55.804 Total : 26280.75 102.66 9743.15 1679.07 57450.42 00:26:55.804 00:26:55.804 15:38:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:26:55.804 15:38:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:55.804 15:38:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:55.804 15:38:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:55.804 15:38:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:55.804 15:38:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:55.804 15:38:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:55.804 rmmod nvme_tcp 00:26:55.804 rmmod nvme_fabrics 00:26:55.804 rmmod nvme_keyring 00:26:55.804 15:38:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:55.804 15:38:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:55.804 15:38:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:55.804 15:38:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1184449 ']' 00:26:55.804 15:38:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1184449 00:26:55.804 15:38:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1184449 ']' 00:26:55.804 15:38:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1184449 00:26:55.804 15:38:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:26:55.804 15:38:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:55.804 15:38:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1184449 00:26:55.804 15:38:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:55.804 15:38:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:55.804 15:38:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1184449' 00:26:55.804 killing process with pid 1184449 00:26:55.804 15:38:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1184449 00:26:55.804 15:38:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1184449 00:26:56.063 15:38:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:56.063 15:38:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:56.063 15:38:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:56.063 15:38:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:56.063 15:38:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:56.063 15:38:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.063 15:38:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:56.063 15:38:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.598 15:38:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:58.598 15:38:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:26:58.598 00:26:58.598 real 0m43.610s 00:26:58.598 user 2m31.314s 00:26:58.598 sys 0m12.399s 00:26:58.598 15:38:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:58.598 15:38:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:58.598 ************************************ 00:26:58.598 END TEST nvmf_perf_adq 00:26:58.598 ************************************ 00:26:58.598 15:38:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:58.598 15:38:28 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:58.598 15:38:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:58.598 15:38:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:58.598 15:38:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:58.598 ************************************ 00:26:58.598 START TEST nvmf_shutdown 00:26:58.598 ************************************ 00:26:58.598 15:38:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:58.598 * Looking for test storage... 
00:26:58.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:58.598 15:38:28 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:58.598 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:26:58.598 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:58.598 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:58.599 ************************************ 00:26:58.599 START TEST nvmf_shutdown_tc1 00:26:58.599 ************************************ 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:26:58.599 15:38:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:58.599 15:38:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:00.505 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:00.505 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:00.505 15:38:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:00.505 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:00.505 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:00.505 15:38:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:00.505 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:00.505 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:00.505 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:00.505 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:00.505 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:00.505 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:00.505 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:00.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:00.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:27:00.505 00:27:00.505 --- 10.0.0.2 ping statistics --- 00:27:00.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.505 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:27:00.505 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:00.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:00.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:27:00.505 00:27:00.505 --- 10.0.0.1 ping statistics --- 00:27:00.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.505 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:27:00.505 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:00.505 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:00.505 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:00.505 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:00.505 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:00.506 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:00.506 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:00.506 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:00.506 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:00.506 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:00.506 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:00.506 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:00.506 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:00.506 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1187747 00:27:00.506 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:00.506 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1187747 00:27:00.506 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1187747 ']' 00:27:00.506 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.506 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:00.506 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:00.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:00.506 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:00.506 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:00.506 [2024-07-13 15:38:31.155943] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:27:00.506 [2024-07-13 15:38:31.156019] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:00.506 EAL: No free 2048 kB hugepages reported on node 1 00:27:00.506 [2024-07-13 15:38:31.194181] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:00.506 [2024-07-13 15:38:31.226781] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:00.764 [2024-07-13 15:38:31.322102] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:00.764 [2024-07-13 15:38:31.322174] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:00.764 [2024-07-13 15:38:31.322192] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:00.764 [2024-07-13 15:38:31.322205] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:00.764 [2024-07-13 15:38:31.322217] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:00.764 [2024-07-13 15:38:31.322314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:00.764 [2024-07-13 15:38:31.322475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:00.764 [2024-07-13 15:38:31.322543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:00.764 [2024-07-13 15:38:31.322545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.764 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:00.764 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:00.764 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:00.764 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:00.764 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:00.764 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:00.764 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:00.764 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:00.765 [2024-07-13 15:38:31.479806] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:00.765 15:38:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.765 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:01.028 Malloc1 00:27:01.028 [2024-07-13 15:38:31.569563] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.028 Malloc2 00:27:01.028 Malloc3 00:27:01.028 Malloc4 00:27:01.028 Malloc5 00:27:01.028 Malloc6 00:27:01.286 Malloc7 00:27:01.286 Malloc8 00:27:01.286 Malloc9 00:27:01.286 Malloc10 00:27:01.286 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.286 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:01.286 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:01.286 15:38:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:01.286 15:38:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1187821 00:27:01.286 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1187821 /var/tmp/bdevperf.sock 00:27:01.286 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1187821 ']' 00:27:01.286 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:01.286 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:01.286 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:01.286 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:01.286 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:01.286 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:01.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:01.286 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:01.286 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:01.286 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:01.286 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:01.286 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:01.286 { 00:27:01.286 "params": { 00:27:01.286 "name": "Nvme$subsystem", 00:27:01.286 "trtype": "$TEST_TRANSPORT", 00:27:01.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.286 "adrfam": "ipv4", 00:27:01.286 "trsvcid": "$NVMF_PORT", 00:27:01.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.286 "hdgst": ${hdgst:-false}, 00:27:01.286 "ddgst": ${ddgst:-false} 00:27:01.286 }, 00:27:01.286 "method": "bdev_nvme_attach_controller" 00:27:01.286 } 00:27:01.286 EOF 00:27:01.286 )") 00:27:01.286 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:01.286 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:01.286 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:01.286 { 00:27:01.286 "params": { 00:27:01.286 "name": "Nvme$subsystem", 00:27:01.286 "trtype": "$TEST_TRANSPORT", 00:27:01.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.286 "adrfam": "ipv4", 00:27:01.286 "trsvcid": "$NVMF_PORT", 00:27:01.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.286 "hdgst": ${hdgst:-false}, 00:27:01.286 "ddgst": ${ddgst:-false} 00:27:01.286 }, 00:27:01.286 "method": "bdev_nvme_attach_controller" 00:27:01.286 } 00:27:01.286 EOF 00:27:01.286 )") 00:27:01.286 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:01.286 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:27:01.286 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:01.286 { 00:27:01.286 "params": { 00:27:01.286 "name": "Nvme$subsystem", 00:27:01.286 "trtype": "$TEST_TRANSPORT", 00:27:01.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.286 "adrfam": "ipv4", 00:27:01.286 "trsvcid": "$NVMF_PORT", 00:27:01.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.287 "hdgst": ${hdgst:-false}, 00:27:01.287 "ddgst": ${ddgst:-false} 00:27:01.287 }, 00:27:01.287 "method": "bdev_nvme_attach_controller" 00:27:01.287 } 00:27:01.287 EOF 00:27:01.287 )") 00:27:01.287 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:01.287 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:01.287 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:01.287 { 00:27:01.287 "params": { 00:27:01.287 "name": "Nvme$subsystem", 00:27:01.287 "trtype": "$TEST_TRANSPORT", 00:27:01.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.287 "adrfam": "ipv4", 00:27:01.287 "trsvcid": "$NVMF_PORT", 00:27:01.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.287 "hdgst": ${hdgst:-false}, 00:27:01.287 "ddgst": ${ddgst:-false} 00:27:01.287 }, 00:27:01.287 "method": "bdev_nvme_attach_controller" 00:27:01.287 } 00:27:01.287 EOF 00:27:01.287 )") 00:27:01.287 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:01.287 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:01.287 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:01.287 { 00:27:01.287 "params": { 00:27:01.287 "name": "Nvme$subsystem", 00:27:01.287 "trtype": "$TEST_TRANSPORT", 00:27:01.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.287 "adrfam": "ipv4", 00:27:01.287 "trsvcid": "$NVMF_PORT", 00:27:01.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.287 "hdgst": ${hdgst:-false}, 00:27:01.287 "ddgst": ${ddgst:-false} 00:27:01.287 }, 00:27:01.287 "method": "bdev_nvme_attach_controller" 00:27:01.287 } 00:27:01.287 EOF 00:27:01.287 )") 00:27:01.287 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:01.287 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:01.287 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:01.287 { 00:27:01.287 "params": { 00:27:01.287 "name": "Nvme$subsystem", 00:27:01.287 "trtype": "$TEST_TRANSPORT", 00:27:01.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.287 "adrfam": "ipv4", 00:27:01.287 "trsvcid": "$NVMF_PORT", 00:27:01.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.287 "hdgst": ${hdgst:-false}, 00:27:01.287 "ddgst": ${ddgst:-false} 00:27:01.287 }, 00:27:01.287 "method": "bdev_nvme_attach_controller" 00:27:01.287 } 00:27:01.287 EOF 00:27:01.287 )") 00:27:01.287 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:01.287 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:27:01.287 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:01.287 { 00:27:01.287 "params": { 00:27:01.287 "name": "Nvme$subsystem", 00:27:01.287 "trtype": "$TEST_TRANSPORT", 00:27:01.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.287 "adrfam": "ipv4", 00:27:01.287 "trsvcid": "$NVMF_PORT", 00:27:01.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.287 "hdgst": ${hdgst:-false}, 00:27:01.287 "ddgst": ${ddgst:-false} 00:27:01.287 }, 00:27:01.287 "method": "bdev_nvme_attach_controller" 00:27:01.287 } 00:27:01.287 EOF 00:27:01.287 )") 00:27:01.287 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:01.287 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:01.287 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:01.287 { 00:27:01.287 "params": { 00:27:01.287 "name": "Nvme$subsystem", 00:27:01.287 "trtype": "$TEST_TRANSPORT", 00:27:01.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.287 "adrfam": "ipv4", 00:27:01.287 "trsvcid": "$NVMF_PORT", 00:27:01.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.287 "hdgst": ${hdgst:-false}, 00:27:01.287 "ddgst": ${ddgst:-false} 00:27:01.287 }, 00:27:01.287 "method": "bdev_nvme_attach_controller" 00:27:01.287 } 00:27:01.287 EOF 00:27:01.287 )") 00:27:01.287 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:01.287 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:01.287 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:01.287 { 00:27:01.287 "params": { 00:27:01.287 "name": "Nvme$subsystem", 00:27:01.287 "trtype": "$TEST_TRANSPORT", 00:27:01.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.287 "adrfam": "ipv4", 00:27:01.287 "trsvcid": "$NVMF_PORT", 00:27:01.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.287 "hdgst": ${hdgst:-false}, 00:27:01.287 "ddgst": ${ddgst:-false} 00:27:01.287 }, 00:27:01.287 "method": "bdev_nvme_attach_controller" 00:27:01.287 } 00:27:01.287 EOF 00:27:01.287 )") 00:27:01.287 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:01.287 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:01.287 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:01.287 { 00:27:01.287 "params": { 00:27:01.287 "name": "Nvme$subsystem", 00:27:01.287 "trtype": "$TEST_TRANSPORT", 00:27:01.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.287 "adrfam": "ipv4", 00:27:01.287 "trsvcid": "$NVMF_PORT", 00:27:01.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.287 "hdgst": ${hdgst:-false}, 00:27:01.287 "ddgst": ${ddgst:-false} 00:27:01.287 }, 00:27:01.287 "method": "bdev_nvme_attach_controller" 00:27:01.287 } 00:27:01.287 EOF 00:27:01.287 )") 00:27:01.545 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:01.545 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:01.545 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:01.545 15:38:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:01.545 "params": { 00:27:01.545 "name": "Nvme1", 00:27:01.545 "trtype": "tcp", 00:27:01.545 "traddr": "10.0.0.2", 00:27:01.545 "adrfam": "ipv4", 00:27:01.545 "trsvcid": "4420", 00:27:01.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:01.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:01.545 "hdgst": false, 00:27:01.545 "ddgst": false 00:27:01.545 }, 00:27:01.545 "method": "bdev_nvme_attach_controller" 00:27:01.545 },{ 00:27:01.545 "params": { 00:27:01.545 "name": "Nvme2", 00:27:01.545 "trtype": "tcp", 00:27:01.545 "traddr": "10.0.0.2", 00:27:01.545 "adrfam": "ipv4", 00:27:01.545 "trsvcid": "4420", 00:27:01.545 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:01.545 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:01.545 "hdgst": false, 00:27:01.545 "ddgst": false 00:27:01.545 }, 00:27:01.545 "method": "bdev_nvme_attach_controller" 00:27:01.545 },{ 00:27:01.545 "params": { 00:27:01.545 "name": "Nvme3", 00:27:01.545 "trtype": "tcp", 00:27:01.545 "traddr": "10.0.0.2", 00:27:01.545 "adrfam": "ipv4", 00:27:01.545 "trsvcid": "4420", 00:27:01.545 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:01.545 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:01.545 "hdgst": false, 00:27:01.545 "ddgst": false 00:27:01.545 }, 00:27:01.545 "method": "bdev_nvme_attach_controller" 00:27:01.545 },{ 00:27:01.545 "params": { 00:27:01.545 "name": "Nvme4", 00:27:01.545 "trtype": "tcp", 00:27:01.545 "traddr": "10.0.0.2", 00:27:01.545 "adrfam": "ipv4", 00:27:01.545 "trsvcid": "4420", 00:27:01.545 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:01.545 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:01.545 "hdgst": false, 00:27:01.545 "ddgst": false 00:27:01.545 }, 00:27:01.545 "method": "bdev_nvme_attach_controller" 00:27:01.545 },{ 00:27:01.545 "params": { 00:27:01.545 "name": "Nvme5", 00:27:01.545 "trtype": "tcp", 00:27:01.545 "traddr": "10.0.0.2", 00:27:01.545 "adrfam": "ipv4", 00:27:01.545 "trsvcid": "4420", 00:27:01.545 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:01.545 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:01.545 "hdgst": false, 00:27:01.545 "ddgst": false 00:27:01.545 }, 00:27:01.545 "method": "bdev_nvme_attach_controller" 00:27:01.545 },{ 00:27:01.545 "params": { 00:27:01.545 "name": "Nvme6", 00:27:01.545 "trtype": "tcp", 00:27:01.545 "traddr": "10.0.0.2", 00:27:01.545 "adrfam": "ipv4", 00:27:01.545 "trsvcid": "4420", 00:27:01.545 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:01.545 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:01.545 "hdgst": false, 00:27:01.545 "ddgst": false 00:27:01.545 }, 00:27:01.545 "method": "bdev_nvme_attach_controller" 00:27:01.545 },{ 00:27:01.545 "params": { 00:27:01.545 "name": "Nvme7", 00:27:01.545 "trtype": "tcp", 00:27:01.545 "traddr": "10.0.0.2", 00:27:01.545 "adrfam": "ipv4", 00:27:01.545 "trsvcid": "4420", 00:27:01.545 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:01.545 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:01.545 "hdgst": false, 00:27:01.545 "ddgst": false 00:27:01.545 }, 00:27:01.545 "method": "bdev_nvme_attach_controller" 00:27:01.545 },{ 00:27:01.545 "params": { 00:27:01.545 "name": "Nvme8", 00:27:01.545 "trtype": "tcp", 00:27:01.545 "traddr": "10.0.0.2", 00:27:01.545 "adrfam": "ipv4", 00:27:01.545 "trsvcid": "4420", 00:27:01.545 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:01.545 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:01.545 "hdgst": false, 
00:27:01.545 "ddgst": false 00:27:01.545 }, 00:27:01.545 "method": "bdev_nvme_attach_controller" 00:27:01.545 },{ 00:27:01.545 "params": { 00:27:01.545 "name": "Nvme9", 00:27:01.545 "trtype": "tcp", 00:27:01.545 "traddr": "10.0.0.2", 00:27:01.545 "adrfam": "ipv4", 00:27:01.545 "trsvcid": "4420", 00:27:01.545 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:01.545 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:01.545 "hdgst": false, 00:27:01.545 "ddgst": false 00:27:01.545 }, 00:27:01.545 "method": "bdev_nvme_attach_controller" 00:27:01.545 },{ 00:27:01.545 "params": { 00:27:01.545 "name": "Nvme10", 00:27:01.545 "trtype": "tcp", 00:27:01.545 "traddr": "10.0.0.2", 00:27:01.545 "adrfam": "ipv4", 00:27:01.545 "trsvcid": "4420", 00:27:01.545 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:01.545 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:01.545 "hdgst": false, 00:27:01.545 "ddgst": false 00:27:01.545 }, 00:27:01.545 "method": "bdev_nvme_attach_controller" 00:27:01.545 }' 00:27:01.545 [2024-07-13 15:38:32.062575] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:01.545 [2024-07-13 15:38:32.062650] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:01.545 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.545 [2024-07-13 15:38:32.099642] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:01.545 [2024-07-13 15:38:32.129467] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.545 [2024-07-13 15:38:32.217770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.449 15:38:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:03.449 15:38:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:03.449 15:38:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:03.449 15:38:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.449 15:38:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:03.449 15:38:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.449 15:38:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1187821 00:27:03.449 15:38:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:03.449 15:38:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:04.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1187821 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1187747 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.390 { 00:27:04.390 "params": { 00:27:04.390 "name": "Nvme$subsystem", 00:27:04.390 "trtype": "$TEST_TRANSPORT", 00:27:04.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.390 "adrfam": "ipv4", 00:27:04.390 "trsvcid": "$NVMF_PORT", 00:27:04.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.390 "hdgst": ${hdgst:-false}, 00:27:04.390 "ddgst": ${ddgst:-false} 00:27:04.390 }, 00:27:04.390 "method": "bdev_nvme_attach_controller" 00:27:04.390 } 00:27:04.390 EOF 00:27:04.390 )") 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.390 { 00:27:04.390 "params": { 00:27:04.390 "name": "Nvme$subsystem", 00:27:04.390 "trtype": "$TEST_TRANSPORT", 00:27:04.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.390 "adrfam": "ipv4", 00:27:04.390 "trsvcid": "$NVMF_PORT", 00:27:04.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.390 "hdgst": ${hdgst:-false}, 00:27:04.390 "ddgst": ${ddgst:-false} 00:27:04.390 }, 00:27:04.390 "method": "bdev_nvme_attach_controller" 00:27:04.390 } 00:27:04.390 EOF 00:27:04.390 )") 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.390 { 00:27:04.390 "params": { 00:27:04.390 "name": "Nvme$subsystem", 00:27:04.390 "trtype": "$TEST_TRANSPORT", 00:27:04.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.390 "adrfam": "ipv4", 00:27:04.390 "trsvcid": "$NVMF_PORT", 00:27:04.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.390 "hdgst": ${hdgst:-false}, 00:27:04.390 "ddgst": ${ddgst:-false} 00:27:04.390 }, 00:27:04.390 "method": "bdev_nvme_attach_controller" 00:27:04.390 } 00:27:04.390 EOF 00:27:04.390 )") 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.390 { 00:27:04.390 "params": { 00:27:04.390 "name": "Nvme$subsystem", 00:27:04.390 "trtype": "$TEST_TRANSPORT", 00:27:04.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.390 "adrfam": "ipv4", 00:27:04.390 "trsvcid": "$NVMF_PORT", 00:27:04.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.390 "hdgst": ${hdgst:-false}, 00:27:04.390 "ddgst": 
${ddgst:-false} 00:27:04.390 }, 00:27:04.390 "method": "bdev_nvme_attach_controller" 00:27:04.390 } 00:27:04.390 EOF 00:27:04.390 )") 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.390 { 00:27:04.390 "params": { 00:27:04.390 "name": "Nvme$subsystem", 00:27:04.390 "trtype": "$TEST_TRANSPORT", 00:27:04.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.390 "adrfam": "ipv4", 00:27:04.390 "trsvcid": "$NVMF_PORT", 00:27:04.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.390 "hdgst": ${hdgst:-false}, 00:27:04.390 "ddgst": ${ddgst:-false} 00:27:04.390 }, 00:27:04.390 "method": "bdev_nvme_attach_controller" 00:27:04.390 } 00:27:04.390 EOF 00:27:04.390 )") 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.390 { 00:27:04.390 "params": { 00:27:04.390 "name": "Nvme$subsystem", 00:27:04.390 "trtype": "$TEST_TRANSPORT", 00:27:04.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.390 "adrfam": "ipv4", 00:27:04.390 "trsvcid": "$NVMF_PORT", 00:27:04.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.390 "hdgst": ${hdgst:-false}, 00:27:04.390 "ddgst": ${ddgst:-false} 00:27:04.390 }, 00:27:04.390 "method": "bdev_nvme_attach_controller" 00:27:04.390 } 00:27:04.390 EOF 00:27:04.390 )") 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.390 { 00:27:04.390 "params": { 00:27:04.390 "name": "Nvme$subsystem", 00:27:04.390 "trtype": "$TEST_TRANSPORT", 00:27:04.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.390 "adrfam": "ipv4", 00:27:04.390 "trsvcid": "$NVMF_PORT", 00:27:04.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.390 "hdgst": ${hdgst:-false}, 00:27:04.390 "ddgst": ${ddgst:-false} 00:27:04.390 }, 00:27:04.390 "method": "bdev_nvme_attach_controller" 00:27:04.390 } 00:27:04.390 EOF 00:27:04.390 )") 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.390 { 00:27:04.390 "params": { 00:27:04.390 "name": "Nvme$subsystem", 00:27:04.390 "trtype": "$TEST_TRANSPORT", 00:27:04.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.390 "adrfam": "ipv4", 00:27:04.390 "trsvcid": "$NVMF_PORT", 00:27:04.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.390 "hdgst": ${hdgst:-false}, 00:27:04.390 "ddgst": ${ddgst:-false} 00:27:04.390 
}, 00:27:04.390 "method": "bdev_nvme_attach_controller" 00:27:04.390 } 00:27:04.390 EOF 00:27:04.390 )") 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.390 { 00:27:04.390 "params": { 00:27:04.390 "name": "Nvme$subsystem", 00:27:04.390 "trtype": "$TEST_TRANSPORT", 00:27:04.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.390 "adrfam": "ipv4", 00:27:04.390 "trsvcid": "$NVMF_PORT", 00:27:04.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.390 "hdgst": ${hdgst:-false}, 00:27:04.390 "ddgst": ${ddgst:-false} 00:27:04.390 }, 00:27:04.390 "method": "bdev_nvme_attach_controller" 00:27:04.390 } 00:27:04.390 EOF 00:27:04.390 )") 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.390 { 00:27:04.390 "params": { 00:27:04.390 "name": "Nvme$subsystem", 00:27:04.390 "trtype": "$TEST_TRANSPORT", 00:27:04.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.390 "adrfam": "ipv4", 00:27:04.390 "trsvcid": "$NVMF_PORT", 00:27:04.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.390 "hdgst": ${hdgst:-false}, 00:27:04.390 "ddgst": ${ddgst:-false} 00:27:04.390 }, 00:27:04.390 "method": "bdev_nvme_attach_controller" 00:27:04.390 } 00:27:04.390 EOF 00:27:04.390 )") 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:04.390 15:38:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:04.390 "params": { 00:27:04.390 "name": "Nvme1", 00:27:04.390 "trtype": "tcp", 00:27:04.390 "traddr": "10.0.0.2", 00:27:04.390 "adrfam": "ipv4", 00:27:04.390 "trsvcid": "4420", 00:27:04.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:04.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:04.391 "hdgst": false, 00:27:04.391 "ddgst": false 00:27:04.391 }, 00:27:04.391 "method": "bdev_nvme_attach_controller" 00:27:04.391 },{ 00:27:04.391 "params": { 00:27:04.391 "name": "Nvme2", 00:27:04.391 "trtype": "tcp", 00:27:04.391 "traddr": "10.0.0.2", 00:27:04.391 "adrfam": "ipv4", 00:27:04.391 "trsvcid": "4420", 00:27:04.391 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:04.391 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:04.391 "hdgst": false, 00:27:04.391 "ddgst": false 00:27:04.391 }, 00:27:04.391 "method": "bdev_nvme_attach_controller" 00:27:04.391 },{ 00:27:04.391 "params": { 00:27:04.391 "name": "Nvme3", 00:27:04.391 "trtype": "tcp", 00:27:04.391 "traddr": "10.0.0.2", 00:27:04.391 "adrfam": "ipv4", 00:27:04.391 "trsvcid": "4420", 00:27:04.391 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:04.391 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:04.391 "hdgst": false, 00:27:04.391 "ddgst": false 00:27:04.391 }, 00:27:04.391 "method": "bdev_nvme_attach_controller" 00:27:04.391 },{ 00:27:04.391 "params": { 00:27:04.391 "name": "Nvme4", 00:27:04.391 "trtype": "tcp", 00:27:04.391 "traddr": "10.0.0.2", 00:27:04.391 "adrfam": "ipv4", 00:27:04.391 "trsvcid": "4420", 00:27:04.391 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:04.391 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:04.391 "hdgst": false, 00:27:04.391 "ddgst": false 00:27:04.391 }, 00:27:04.391 "method": "bdev_nvme_attach_controller" 00:27:04.391 },{ 00:27:04.391 "params": { 00:27:04.391 "name": "Nvme5", 00:27:04.391 "trtype": "tcp", 00:27:04.391 "traddr": "10.0.0.2", 00:27:04.391 "adrfam": "ipv4", 00:27:04.391 "trsvcid": "4420", 00:27:04.391 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:04.391 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:04.391 "hdgst": false, 00:27:04.391 "ddgst": false 00:27:04.391 }, 00:27:04.391 "method": "bdev_nvme_attach_controller" 00:27:04.391 },{ 00:27:04.391 "params": { 00:27:04.391 "name": "Nvme6", 00:27:04.391 "trtype": "tcp", 00:27:04.391 "traddr": "10.0.0.2", 00:27:04.391 "adrfam": "ipv4", 00:27:04.391 "trsvcid": "4420", 00:27:04.391 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:04.391 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:04.391 "hdgst": false, 00:27:04.391 "ddgst": false 00:27:04.391 }, 00:27:04.391 "method": "bdev_nvme_attach_controller" 00:27:04.391 },{ 00:27:04.391 "params": { 00:27:04.391 "name": "Nvme7", 00:27:04.391 "trtype": "tcp", 00:27:04.391 "traddr": "10.0.0.2", 00:27:04.391 "adrfam": "ipv4", 00:27:04.391 "trsvcid": "4420", 00:27:04.391 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:04.391 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:04.391 "hdgst": false, 00:27:04.391 "ddgst": false 00:27:04.391 }, 00:27:04.391 "method": "bdev_nvme_attach_controller" 00:27:04.391 },{ 00:27:04.391 "params": { 00:27:04.391 "name": "Nvme8", 00:27:04.391 "trtype": "tcp", 00:27:04.391 "traddr": "10.0.0.2", 00:27:04.391 "adrfam": "ipv4", 00:27:04.391 "trsvcid": "4420", 00:27:04.391 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:04.391 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:04.391 "hdgst": false, 
00:27:04.391 "ddgst": false 00:27:04.391 }, 00:27:04.391 "method": "bdev_nvme_attach_controller" 00:27:04.391 },{ 00:27:04.391 "params": { 00:27:04.391 "name": "Nvme9", 00:27:04.391 "trtype": "tcp", 00:27:04.391 "traddr": "10.0.0.2", 00:27:04.391 "adrfam": "ipv4", 00:27:04.391 "trsvcid": "4420", 00:27:04.391 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:04.391 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:04.391 "hdgst": false, 00:27:04.391 "ddgst": false 00:27:04.391 }, 00:27:04.391 "method": "bdev_nvme_attach_controller" 00:27:04.391 },{ 00:27:04.391 "params": { 00:27:04.391 "name": "Nvme10", 00:27:04.391 "trtype": "tcp", 00:27:04.391 "traddr": "10.0.0.2", 00:27:04.391 "adrfam": "ipv4", 00:27:04.391 "trsvcid": "4420", 00:27:04.391 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:04.391 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:04.391 "hdgst": false, 00:27:04.391 "ddgst": false 00:27:04.391 }, 00:27:04.391 "method": "bdev_nvme_attach_controller" 00:27:04.391 }' 00:27:04.391 [2024-07-13 15:38:35.077336] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:04.391 [2024-07-13 15:38:35.077409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1188239 ] 00:27:04.391 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.391 [2024-07-13 15:38:35.113829] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:04.391 [2024-07-13 15:38:35.143394] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.650 [2024-07-13 15:38:35.234793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.062 Running I/O for 1 seconds... 
00:27:07.438 00:27:07.438 Latency(us) 00:27:07.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:07.438 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.438 Verification LBA range: start 0x0 length 0x400 00:27:07.438 Nvme1n1 : 1.14 224.79 14.05 0.00 0.00 281995.19 22233.69 250104.79 00:27:07.438 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.438 Verification LBA range: start 0x0 length 0x400 00:27:07.438 Nvme2n1 : 1.10 232.49 14.53 0.00 0.00 267902.48 18932.62 253211.69 00:27:07.438 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.438 Verification LBA range: start 0x0 length 0x400 00:27:07.438 Nvme3n1 : 1.14 224.03 14.00 0.00 0.00 271896.46 21651.15 257872.02 00:27:07.438 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.438 Verification LBA range: start 0x0 length 0x400 00:27:07.438 Nvme4n1 : 1.07 239.57 14.97 0.00 0.00 250272.05 21748.24 251658.24 00:27:07.438 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.438 Verification LBA range: start 0x0 length 0x400 00:27:07.438 Nvme5n1 : 1.17 219.23 13.70 0.00 0.00 270659.89 34369.99 268746.15 00:27:07.438 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.438 Verification LBA range: start 0x0 length 0x400 00:27:07.438 Nvme6n1 : 1.13 227.30 14.21 0.00 0.00 255948.99 22233.69 250104.79 00:27:07.438 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.438 Verification LBA range: start 0x0 length 0x400 00:27:07.438 Nvme7n1 : 1.18 216.50 13.53 0.00 0.00 265403.73 18544.26 293601.28 00:27:07.438 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.438 Verification LBA range: start 0x0 length 0x400 00:27:07.438 Nvme8n1 : 1.19 323.78 20.24 0.00 0.00 173344.68 11893.57 246997.90 00:27:07.438 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.438 Verification LBA range: start 0x0 length 0x400 00:27:07.438 Nvme9n1 : 1.18 217.30 13.58 0.00 0.00 255415.94 21651.15 271853.04 00:27:07.438 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:07.438 Verification LBA range: start 0x0 length 0x400 00:27:07.438 Nvme10n1 : 1.20 267.48 16.72 0.00 0.00 204273.59 13981.01 251658.24 00:27:07.438 =================================================================================================================== 00:27:07.438 Total : 2392.47 149.53 0.00 0.00 245102.67 11893.57 293601.28 00:27:07.438 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:07.438 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:07.438 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:07.438 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:07.438 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:07.438 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:07.438 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:07.438 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:07.438 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:07.438 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:07.438 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:07.438 rmmod nvme_tcp 00:27:07.438 rmmod nvme_fabrics 00:27:07.438 rmmod nvme_keyring 00:27:07.438 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:07.438 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:07.438 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:07.438 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1187747 ']' 00:27:07.438 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1187747 00:27:07.438 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1187747 ']' 00:27:07.438 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1187747 00:27:07.438 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:27:07.438 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:07.438 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1187747 00:27:07.698 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:07.698 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:07.698 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1187747' 00:27:07.698 killing process with pid 1187747 00:27:07.698 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1187747 00:27:07.698 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1187747 00:27:08.269 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:08.269 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:08.269 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:08.269 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:08.269 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:08.269 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.269 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:08.269 15:38:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.173 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:10.173 00:27:10.173 real 0m11.780s 00:27:10.173 user 0m34.072s 00:27:10.173 sys 0m3.171s 00:27:10.173 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:10.173 15:38:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:10.173 ************************************ 00:27:10.173 END TEST nvmf_shutdown_tc1 00:27:10.173 ************************************ 00:27:10.173 15:38:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:27:10.173 15:38:40 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:10.173 15:38:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:10.173 15:38:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:10.173 15:38:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:10.173 ************************************ 00:27:10.173 START TEST nvmf_shutdown_tc2 00:27:10.173 ************************************ 00:27:10.173 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:27:10.173 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:10.173 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:10.173 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:10.173 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.173 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:10.173 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:10.173 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local 
-ga net_devs 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:10.174 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.174 15:38:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:10.174 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:10.174 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:10.174 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:10.174 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:10.434 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:10.434 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:10.434 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:10.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:10.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:27:10.434 00:27:10.434 --- 10.0.0.2 ping statistics --- 00:27:10.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.434 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:27:10.434 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:10.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:10.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:27:10.434 00:27:10.434 --- 10.0.0.1 ping statistics --- 00:27:10.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.434 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:27:10.434 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.434 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:10.434 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:10.434 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.434 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:10.434 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:10.434 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.434 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:10.434 15:38:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:10.434 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:10.434 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:10.434 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:10.434 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.434 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1189000 00:27:10.434 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:10.434 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1189000 00:27:10.434 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1189000 ']' 00:27:10.434 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.434 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:10.434 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
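For shutdown_tc2 the nvmf_tcp_init sequence traced above rebuilds the split topology: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2, the initiator port cvl_0_1 keeps 10.0.0.1 in the root namespace, and the two pings confirm reachability in both directions before nvmf_tgt is launched inside the namespace via ip netns exec. Condensed into plain commands (interface and namespace names copied from the trace; other NICs would differ):

  ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the listener
  ping -c 1 10.0.0.2                                             # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> root namespace
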
00:27:10.434 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:10.434 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.434 [2024-07-13 15:38:41.056178] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:10.434 [2024-07-13 15:38:41.056258] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.434 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.434 [2024-07-13 15:38:41.092542] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:10.434 [2024-07-13 15:38:41.117806] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:10.693 [2024-07-13 15:38:41.210074] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.693 [2024-07-13 15:38:41.210142] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:10.693 [2024-07-13 15:38:41.210157] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:10.693 [2024-07-13 15:38:41.210195] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:10.693 [2024-07-13 15:38:41.210205] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:10.693 [2024-07-13 15:38:41.211888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:10.693 [2024-07-13 15:38:41.211969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:10.693 [2024-07-13 15:38:41.212021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:10.693 [2024-07-13 15:38:41.212025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.693 [2024-07-13 15:38:41.355522] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:10.693 15:38:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.693 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.693 Malloc1 00:27:10.693 [2024-07-13 15:38:41.430430] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:10.693 Malloc2 00:27:10.951 Malloc3 00:27:10.951 Malloc4 00:27:10.951 Malloc5 00:27:10.951 Malloc6 00:27:10.951 Malloc7 00:27:11.209 Malloc8 00:27:11.209 Malloc9 00:27:11.209 Malloc10 00:27:11.209 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.209 15:38:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:11.209 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:11.209 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:11.209 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1189181 00:27:11.209 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1189181 /var/tmp/bdevperf.sock 00:27:11.209 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:11.209 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:11.209 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1189181 ']' 00:27:11.209 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.210 { 00:27:11.210 "params": { 00:27:11.210 "name": "Nvme$subsystem", 00:27:11.210 "trtype": "$TEST_TRANSPORT", 00:27:11.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.210 "adrfam": "ipv4", 00:27:11.210 "trsvcid": "$NVMF_PORT", 00:27:11.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.210 "hdgst": ${hdgst:-false}, 00:27:11.210 "ddgst": ${ddgst:-false} 00:27:11.210 }, 00:27:11.210 "method": "bdev_nvme_attach_controller" 00:27:11.210 } 00:27:11.210 EOF 00:27:11.210 )") 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:11.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
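The heredoc fragment above, and the nine identical ones traced below it, are produced by the per-subsystem loop in gen_nvmf_target_json; a condensed sketch of that loop follows, assuming the field values the final printf output shows (Nvme1..Nvme10 against cnode1..cnode10 on 10.0.0.2:4420). The outer JSON envelope that bdevperf ultimately receives on /dev/fd/63 is not fully visible in this excerpt, so the sketch only joins the fragments into a bare array for illustration.

# Hypothetical condensed form of the gen_nvmf_target_json loop traced here.
config=()
for subsystem in {1..10}; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the ten fragments with commas, as the IFS=,/printf pair in the trace does.
# Wrapping them in [...] is only to make this sketch's output valid standalone JSON;
# the real helper embeds the joined list in the larger config document that
# bdevperf reads from /dev/fd/63.
(IFS=,; printf '[%s]\n' "${config[*]}") | jq .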
00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.210 { 00:27:11.210 "params": { 00:27:11.210 "name": "Nvme$subsystem", 00:27:11.210 "trtype": "$TEST_TRANSPORT", 00:27:11.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.210 "adrfam": "ipv4", 00:27:11.210 "trsvcid": "$NVMF_PORT", 00:27:11.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.210 "hdgst": ${hdgst:-false}, 00:27:11.210 "ddgst": ${ddgst:-false} 00:27:11.210 }, 00:27:11.210 "method": "bdev_nvme_attach_controller" 00:27:11.210 } 00:27:11.210 EOF 00:27:11.210 )") 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.210 { 00:27:11.210 "params": { 00:27:11.210 "name": "Nvme$subsystem", 00:27:11.210 "trtype": "$TEST_TRANSPORT", 00:27:11.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.210 "adrfam": "ipv4", 00:27:11.210 "trsvcid": "$NVMF_PORT", 00:27:11.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.210 "hdgst": ${hdgst:-false}, 00:27:11.210 "ddgst": ${ddgst:-false} 00:27:11.210 }, 00:27:11.210 "method": "bdev_nvme_attach_controller" 00:27:11.210 } 00:27:11.210 EOF 00:27:11.210 )") 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.210 { 00:27:11.210 "params": { 00:27:11.210 "name": "Nvme$subsystem", 00:27:11.210 "trtype": "$TEST_TRANSPORT", 00:27:11.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.210 "adrfam": "ipv4", 00:27:11.210 "trsvcid": "$NVMF_PORT", 00:27:11.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.210 "hdgst": ${hdgst:-false}, 00:27:11.210 "ddgst": ${ddgst:-false} 00:27:11.210 }, 00:27:11.210 "method": "bdev_nvme_attach_controller" 00:27:11.210 } 00:27:11.210 EOF 00:27:11.210 )") 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.210 { 00:27:11.210 "params": { 00:27:11.210 "name": "Nvme$subsystem", 00:27:11.210 "trtype": "$TEST_TRANSPORT", 00:27:11.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.210 "adrfam": "ipv4", 00:27:11.210 "trsvcid": "$NVMF_PORT", 00:27:11.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:27:11.210 "hdgst": ${hdgst:-false}, 00:27:11.210 "ddgst": ${ddgst:-false} 00:27:11.210 }, 00:27:11.210 "method": "bdev_nvme_attach_controller" 00:27:11.210 } 00:27:11.210 EOF 00:27:11.210 )") 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.210 { 00:27:11.210 "params": { 00:27:11.210 "name": "Nvme$subsystem", 00:27:11.210 "trtype": "$TEST_TRANSPORT", 00:27:11.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.210 "adrfam": "ipv4", 00:27:11.210 "trsvcid": "$NVMF_PORT", 00:27:11.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.210 "hdgst": ${hdgst:-false}, 00:27:11.210 "ddgst": ${ddgst:-false} 00:27:11.210 }, 00:27:11.210 "method": "bdev_nvme_attach_controller" 00:27:11.210 } 00:27:11.210 EOF 00:27:11.210 )") 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.210 { 00:27:11.210 "params": { 00:27:11.210 "name": "Nvme$subsystem", 00:27:11.210 "trtype": "$TEST_TRANSPORT", 00:27:11.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.210 "adrfam": "ipv4", 00:27:11.210 "trsvcid": "$NVMF_PORT", 00:27:11.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.210 "hdgst": ${hdgst:-false}, 00:27:11.210 "ddgst": ${ddgst:-false} 00:27:11.210 }, 00:27:11.210 "method": "bdev_nvme_attach_controller" 00:27:11.210 } 00:27:11.210 EOF 00:27:11.210 )") 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.210 { 00:27:11.210 "params": { 00:27:11.210 "name": "Nvme$subsystem", 00:27:11.210 "trtype": "$TEST_TRANSPORT", 00:27:11.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.210 "adrfam": "ipv4", 00:27:11.210 "trsvcid": "$NVMF_PORT", 00:27:11.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.210 "hdgst": ${hdgst:-false}, 00:27:11.210 "ddgst": ${ddgst:-false} 00:27:11.210 }, 00:27:11.210 "method": "bdev_nvme_attach_controller" 00:27:11.210 } 00:27:11.210 EOF 00:27:11.210 )") 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.210 { 00:27:11.210 "params": { 00:27:11.210 "name": "Nvme$subsystem", 00:27:11.210 "trtype": "$TEST_TRANSPORT", 00:27:11.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.210 "adrfam": "ipv4", 00:27:11.210 "trsvcid": "$NVMF_PORT", 00:27:11.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.210 "hdgst": 
${hdgst:-false}, 00:27:11.210 "ddgst": ${ddgst:-false} 00:27:11.210 }, 00:27:11.210 "method": "bdev_nvme_attach_controller" 00:27:11.210 } 00:27:11.210 EOF 00:27:11.210 )") 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:11.210 { 00:27:11.210 "params": { 00:27:11.210 "name": "Nvme$subsystem", 00:27:11.210 "trtype": "$TEST_TRANSPORT", 00:27:11.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.210 "adrfam": "ipv4", 00:27:11.210 "trsvcid": "$NVMF_PORT", 00:27:11.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.210 "hdgst": ${hdgst:-false}, 00:27:11.210 "ddgst": ${ddgst:-false} 00:27:11.210 }, 00:27:11.210 "method": "bdev_nvme_attach_controller" 00:27:11.210 } 00:27:11.210 EOF 00:27:11.210 )") 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:11.210 15:38:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:11.210 "params": { 00:27:11.210 "name": "Nvme1", 00:27:11.210 "trtype": "tcp", 00:27:11.210 "traddr": "10.0.0.2", 00:27:11.210 "adrfam": "ipv4", 00:27:11.210 "trsvcid": "4420", 00:27:11.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:11.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:11.210 "hdgst": false, 00:27:11.210 "ddgst": false 00:27:11.210 }, 00:27:11.210 "method": "bdev_nvme_attach_controller" 00:27:11.210 },{ 00:27:11.210 "params": { 00:27:11.210 "name": "Nvme2", 00:27:11.210 "trtype": "tcp", 00:27:11.210 "traddr": "10.0.0.2", 00:27:11.210 "adrfam": "ipv4", 00:27:11.210 "trsvcid": "4420", 00:27:11.211 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:11.211 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:11.211 "hdgst": false, 00:27:11.211 "ddgst": false 00:27:11.211 }, 00:27:11.211 "method": "bdev_nvme_attach_controller" 00:27:11.211 },{ 00:27:11.211 "params": { 00:27:11.211 "name": "Nvme3", 00:27:11.211 "trtype": "tcp", 00:27:11.211 "traddr": "10.0.0.2", 00:27:11.211 "adrfam": "ipv4", 00:27:11.211 "trsvcid": "4420", 00:27:11.211 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:11.211 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:11.211 "hdgst": false, 00:27:11.211 "ddgst": false 00:27:11.211 }, 00:27:11.211 "method": "bdev_nvme_attach_controller" 00:27:11.211 },{ 00:27:11.211 "params": { 00:27:11.211 "name": "Nvme4", 00:27:11.211 "trtype": "tcp", 00:27:11.211 "traddr": "10.0.0.2", 00:27:11.211 "adrfam": "ipv4", 00:27:11.211 "trsvcid": "4420", 00:27:11.211 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:11.211 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:11.211 "hdgst": false, 00:27:11.211 "ddgst": false 00:27:11.211 }, 00:27:11.211 "method": "bdev_nvme_attach_controller" 00:27:11.211 },{ 00:27:11.211 "params": { 00:27:11.211 "name": "Nvme5", 00:27:11.211 "trtype": "tcp", 00:27:11.211 "traddr": "10.0.0.2", 00:27:11.211 "adrfam": "ipv4", 00:27:11.211 "trsvcid": "4420", 00:27:11.211 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:11.211 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:11.211 "hdgst": false, 00:27:11.211 "ddgst": false 00:27:11.211 }, 00:27:11.211 
"method": "bdev_nvme_attach_controller" 00:27:11.211 },{ 00:27:11.211 "params": { 00:27:11.211 "name": "Nvme6", 00:27:11.211 "trtype": "tcp", 00:27:11.211 "traddr": "10.0.0.2", 00:27:11.211 "adrfam": "ipv4", 00:27:11.211 "trsvcid": "4420", 00:27:11.211 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:11.211 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:11.211 "hdgst": false, 00:27:11.211 "ddgst": false 00:27:11.211 }, 00:27:11.211 "method": "bdev_nvme_attach_controller" 00:27:11.211 },{ 00:27:11.211 "params": { 00:27:11.211 "name": "Nvme7", 00:27:11.211 "trtype": "tcp", 00:27:11.211 "traddr": "10.0.0.2", 00:27:11.211 "adrfam": "ipv4", 00:27:11.211 "trsvcid": "4420", 00:27:11.211 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:11.211 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:11.211 "hdgst": false, 00:27:11.211 "ddgst": false 00:27:11.211 }, 00:27:11.211 "method": "bdev_nvme_attach_controller" 00:27:11.211 },{ 00:27:11.211 "params": { 00:27:11.211 "name": "Nvme8", 00:27:11.211 "trtype": "tcp", 00:27:11.211 "traddr": "10.0.0.2", 00:27:11.211 "adrfam": "ipv4", 00:27:11.211 "trsvcid": "4420", 00:27:11.211 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:11.211 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:11.211 "hdgst": false, 00:27:11.211 "ddgst": false 00:27:11.211 }, 00:27:11.211 "method": "bdev_nvme_attach_controller" 00:27:11.211 },{ 00:27:11.211 "params": { 00:27:11.211 "name": "Nvme9", 00:27:11.211 "trtype": "tcp", 00:27:11.211 "traddr": "10.0.0.2", 00:27:11.211 "adrfam": "ipv4", 00:27:11.211 "trsvcid": "4420", 00:27:11.211 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:11.211 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:11.211 "hdgst": false, 00:27:11.211 "ddgst": false 00:27:11.211 }, 00:27:11.211 "method": "bdev_nvme_attach_controller" 00:27:11.211 },{ 00:27:11.211 "params": { 00:27:11.211 "name": "Nvme10", 00:27:11.211 "trtype": "tcp", 00:27:11.211 "traddr": "10.0.0.2", 00:27:11.211 "adrfam": "ipv4", 00:27:11.211 "trsvcid": "4420", 00:27:11.211 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:11.211 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:11.211 "hdgst": false, 00:27:11.211 "ddgst": false 00:27:11.211 }, 00:27:11.211 "method": "bdev_nvme_attach_controller" 00:27:11.211 }' 00:27:11.211 [2024-07-13 15:38:41.923544] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:11.211 [2024-07-13 15:38:41.923620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189181 ] 00:27:11.211 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.211 [2024-07-13 15:38:41.959606] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:11.470 [2024-07-13 15:38:41.989859] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.470 [2024-07-13 15:38:42.076392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.371 Running I/O for 10 seconds... 
00:27:13.371 15:38:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:13.371 15:38:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:13.371 15:38:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:13.371 15:38:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.371 15:38:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:13.371 15:38:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.371 15:38:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:13.371 15:38:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:13.371 15:38:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:13.371 15:38:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:13.371 15:38:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:13.371 15:38:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:13.371 15:38:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:13.371 15:38:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:13.371 15:38:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.371 15:38:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:13.371 15:38:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:13.371 15:38:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.371 15:38:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:13.371 15:38:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:13.371 15:38:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:13.630 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:13.630 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:13.630 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:13.630 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.630 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:13.630 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:13.630 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.630 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:13.630 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:13.630 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@67 -- # sleep 0.25 00:27:13.888 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:13.888 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:13.888 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:13.888 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.888 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:13.888 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:13.888 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.888 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:13.888 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:13.888 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:13.888 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:13.888 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:13.888 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1189181 00:27:13.888 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1189181 ']' 00:27:13.888 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1189181 00:27:13.888 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:27:13.888 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:13.888 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1189181 00:27:13.888 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:13.888 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:13.888 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1189181' 00:27:13.888 killing process with pid 1189181 00:27:13.888 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1189181 00:27:13.888 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1189181 00:27:13.888 Received shutdown signal, test time was about 0.955482 seconds 00:27:13.888 00:27:13.888 Latency(us) 00:27:13.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:13.888 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.888 Verification LBA range: start 0x0 length 0x400 00:27:13.888 Nvme1n1 : 0.95 268.16 16.76 0.00 0.00 235898.12 18252.99 267192.70 00:27:13.888 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.888 Verification LBA range: start 0x0 length 0x400 00:27:13.888 Nvme2n1 : 0.93 209.86 13.12 0.00 0.00 294112.86 5849.69 271853.04 00:27:13.888 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.888 Verification LBA range: start 0x0 
length 0x400 00:27:13.888 Nvme3n1 : 0.95 269.44 16.84 0.00 0.00 225462.61 18641.35 267192.70 00:27:13.888 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.888 Verification LBA range: start 0x0 length 0x400 00:27:13.888 Nvme4n1 : 0.90 218.25 13.64 0.00 0.00 269776.59 2014.63 270299.59 00:27:13.888 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.888 Verification LBA range: start 0x0 length 0x400 00:27:13.888 Nvme5n1 : 0.92 209.38 13.09 0.00 0.00 277477.20 38641.97 246997.90 00:27:13.888 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.888 Verification LBA range: start 0x0 length 0x400 00:27:13.888 Nvme6n1 : 0.94 208.00 13.00 0.00 0.00 272518.18 5728.33 271853.04 00:27:13.888 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.888 Verification LBA range: start 0x0 length 0x400 00:27:13.888 Nvme7n1 : 0.91 210.42 13.15 0.00 0.00 264003.63 22330.79 262532.36 00:27:13.888 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.888 Verification LBA range: start 0x0 length 0x400 00:27:13.888 Nvme8n1 : 0.92 208.08 13.01 0.00 0.00 261279.54 19709.35 270299.59 00:27:13.888 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.888 Verification LBA range: start 0x0 length 0x400 00:27:13.888 Nvme9n1 : 0.94 204.11 12.76 0.00 0.00 261718.47 23010.42 284280.60 00:27:13.888 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.888 Verification LBA range: start 0x0 length 0x400 00:27:13.888 Nvme10n1 : 0.95 202.58 12.66 0.00 0.00 258177.20 23204.60 310689.19 00:27:13.888 =================================================================================================================== 00:27:13.888 Total : 2208.29 138.02 0.00 0.00 260196.01 2014.63 310689.19 00:27:14.147 15:38:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1189000 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:15.525 rmmod nvme_tcp 00:27:15.525 rmmod nvme_fabrics 00:27:15.525 rmmod nvme_keyring 00:27:15.525 15:38:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1189000 ']' 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1189000 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1189000 ']' 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1189000 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1189000 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1189000' 00:27:15.525 killing process with pid 1189000 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1189000 00:27:15.525 15:38:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1189000 00:27:15.783 15:38:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:15.783 15:38:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:15.783 15:38:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:15.783 15:38:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:15.783 15:38:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:15.783 15:38:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.783 15:38:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:15.783 15:38:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:18.315 00:27:18.315 real 0m7.676s 00:27:18.315 user 0m23.152s 00:27:18.315 sys 0m1.553s 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.315 ************************************ 00:27:18.315 END TEST nvmf_shutdown_tc2 00:27:18.315 ************************************ 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:18.315 15:38:48 
nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:18.315 ************************************ 00:27:18.315 START TEST nvmf_shutdown_tc3 00:27:18.315 ************************************ 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:18.315 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:18.316 15:38:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:18.316 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:18.316 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:18.316 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:18.316 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
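At this point the helper has matched the two ice ports (0x8086:0x159b at 0000:0a:00.0 and 0000:0a:00.1) and resolved the kernel net devices behind them, cvl_0_0 and cvl_0_1. A reduced sketch of that sysfs walk plus the role assignment that follows below in nvmf_tcp_init, keeping only the steps visible in the trace; the driver and link-state filtering the real helper performs is omitted.

# Hypothetical reduced form of the PCI-to-netdev discovery traced above; the PCI
# addresses are the two ports the log reports.
pci_devs=(0000:0a:00.0 0000:0a:00.1)
net_devs=()

for pci in "${pci_devs[@]}"; do
    # Each network PCI function lists its interface name(s) under net/ in sysfs.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep just the names, e.g. cvl_0_0
    net_devs+=("${pci_net_devs[@]}")
done

# With two usable ports, one side becomes the target and the other the initiator,
# matching the cvl_0_0 / cvl_0_1 assignment that follows in the trace.
NVMF_TARGET_INTERFACE=${net_devs[0]}
NVMF_INITIATOR_INTERFACE=${net_devs[1]}
echo "target=$NVMF_TARGET_INTERFACE initiator=$NVMF_INITIATOR_INTERFACE"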
00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:18.316 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:18.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:18.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:27:18.317 00:27:18.317 --- 10.0.0.2 ping statistics --- 00:27:18.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.317 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:18.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:18.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:27:18.317 00:27:18.317 --- 10.0.0.1 ping statistics --- 00:27:18.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.317 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1190091 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1190091 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1190091 ']' 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:18.317 15:38:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:18.317 [2024-07-13 15:38:48.788167] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:27:18.317 [2024-07-13 15:38:48.788261] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.317 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.317 [2024-07-13 15:38:48.826295] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:18.317 [2024-07-13 15:38:48.858579] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:18.317 [2024-07-13 15:38:48.949813] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.317 [2024-07-13 15:38:48.949889] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.317 [2024-07-13 15:38:48.949915] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:18.317 [2024-07-13 15:38:48.949929] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:18.317 [2024-07-13 15:38:48.949941] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:18.317 [2024-07-13 15:38:48.950024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:18.317 [2024-07-13 15:38:48.950140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:18.317 [2024-07-13 15:38:48.950204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:18.317 [2024-07-13 15:38:48.950206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.317 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:18.317 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:27:18.317 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:18.317 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:18.317 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:18.575 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:18.575 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:18.575 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.575 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:18.575 [2024-07-13 15:38:49.104820] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:18.575 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.575 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:18.576 15:38:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.576 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:18.576 Malloc1 00:27:18.576 [2024-07-13 15:38:49.184596] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:18.576 Malloc2 00:27:18.576 Malloc3 00:27:18.576 Malloc4 00:27:18.833 Malloc5 00:27:18.833 Malloc6 00:27:18.833 Malloc7 00:27:18.833 Malloc8 00:27:18.833 Malloc9 00:27:18.833 Malloc10 00:27:19.091 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.091 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:19.091 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:19.091 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:19.091 15:38:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1190270 00:27:19.091 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1190270 /var/tmp/bdevperf.sock 00:27:19.091 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1190270 ']' 00:27:19.091 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:19.091 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:19.091 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:19.091 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:19.091 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:19.091 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:19.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:19.091 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:19.091 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:19.091 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:19.091 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:19.091 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:19.091 { 00:27:19.091 "params": { 00:27:19.091 "name": "Nvme$subsystem", 00:27:19.091 "trtype": "$TEST_TRANSPORT", 00:27:19.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.091 "adrfam": "ipv4", 00:27:19.091 "trsvcid": "$NVMF_PORT", 00:27:19.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.091 "hdgst": ${hdgst:-false}, 00:27:19.091 "ddgst": ${ddgst:-false} 00:27:19.091 }, 00:27:19.091 "method": "bdev_nvme_attach_controller" 00:27:19.091 } 00:27:19.091 EOF 00:27:19.091 )") 00:27:19.091 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:19.091 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:19.091 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:19.091 { 00:27:19.091 "params": { 00:27:19.091 "name": "Nvme$subsystem", 00:27:19.091 "trtype": "$TEST_TRANSPORT", 00:27:19.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.091 "adrfam": "ipv4", 00:27:19.091 "trsvcid": "$NVMF_PORT", 00:27:19.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.091 "hdgst": ${hdgst:-false}, 00:27:19.091 "ddgst": ${ddgst:-false} 00:27:19.091 }, 00:27:19.091 "method": "bdev_nvme_attach_controller" 00:27:19.091 } 00:27:19.091 EOF 00:27:19.091 )") 00:27:19.091 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:19.091 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:19.091 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:19.091 { 00:27:19.091 "params": { 00:27:19.091 "name": "Nvme$subsystem", 00:27:19.091 "trtype": "$TEST_TRANSPORT", 00:27:19.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.091 "adrfam": "ipv4", 00:27:19.091 "trsvcid": "$NVMF_PORT", 00:27:19.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.091 "hdgst": ${hdgst:-false}, 00:27:19.091 "ddgst": ${ddgst:-false} 00:27:19.092 }, 00:27:19.092 "method": "bdev_nvme_attach_controller" 00:27:19.092 } 00:27:19.092 EOF 00:27:19.092 )") 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:19.092 { 00:27:19.092 "params": { 00:27:19.092 "name": "Nvme$subsystem", 00:27:19.092 "trtype": "$TEST_TRANSPORT", 00:27:19.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.092 "adrfam": "ipv4", 00:27:19.092 "trsvcid": "$NVMF_PORT", 00:27:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.092 "hdgst": ${hdgst:-false}, 00:27:19.092 "ddgst": ${ddgst:-false} 00:27:19.092 }, 00:27:19.092 "method": "bdev_nvme_attach_controller" 00:27:19.092 } 00:27:19.092 EOF 00:27:19.092 )") 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:19.092 { 00:27:19.092 "params": { 00:27:19.092 "name": "Nvme$subsystem", 00:27:19.092 "trtype": "$TEST_TRANSPORT", 00:27:19.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.092 "adrfam": "ipv4", 00:27:19.092 "trsvcid": "$NVMF_PORT", 00:27:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.092 "hdgst": ${hdgst:-false}, 00:27:19.092 "ddgst": ${ddgst:-false} 00:27:19.092 }, 00:27:19.092 "method": "bdev_nvme_attach_controller" 00:27:19.092 } 00:27:19.092 EOF 00:27:19.092 )") 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:19.092 { 00:27:19.092 "params": { 00:27:19.092 "name": "Nvme$subsystem", 00:27:19.092 "trtype": "$TEST_TRANSPORT", 00:27:19.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.092 "adrfam": "ipv4", 00:27:19.092 "trsvcid": "$NVMF_PORT", 00:27:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.092 "hdgst": ${hdgst:-false}, 00:27:19.092 "ddgst": ${ddgst:-false} 00:27:19.092 }, 00:27:19.092 "method": "bdev_nvme_attach_controller" 00:27:19.092 } 00:27:19.092 EOF 00:27:19.092 )") 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:19.092 { 00:27:19.092 "params": { 00:27:19.092 "name": "Nvme$subsystem", 00:27:19.092 "trtype": "$TEST_TRANSPORT", 00:27:19.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.092 "adrfam": "ipv4", 00:27:19.092 "trsvcid": "$NVMF_PORT", 00:27:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.092 "hdgst": ${hdgst:-false}, 00:27:19.092 "ddgst": ${ddgst:-false} 00:27:19.092 }, 00:27:19.092 "method": "bdev_nvme_attach_controller" 00:27:19.092 } 00:27:19.092 EOF 00:27:19.092 )") 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:19.092 { 00:27:19.092 "params": { 00:27:19.092 "name": "Nvme$subsystem", 00:27:19.092 "trtype": "$TEST_TRANSPORT", 00:27:19.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.092 "adrfam": "ipv4", 00:27:19.092 "trsvcid": "$NVMF_PORT", 00:27:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.092 "hdgst": ${hdgst:-false}, 00:27:19.092 "ddgst": ${ddgst:-false} 00:27:19.092 }, 00:27:19.092 "method": "bdev_nvme_attach_controller" 00:27:19.092 } 00:27:19.092 EOF 00:27:19.092 )") 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:19.092 { 00:27:19.092 "params": { 00:27:19.092 "name": "Nvme$subsystem", 00:27:19.092 "trtype": "$TEST_TRANSPORT", 00:27:19.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.092 "adrfam": "ipv4", 00:27:19.092 "trsvcid": "$NVMF_PORT", 00:27:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.092 "hdgst": ${hdgst:-false}, 00:27:19.092 "ddgst": ${ddgst:-false} 00:27:19.092 }, 00:27:19.092 "method": "bdev_nvme_attach_controller" 00:27:19.092 } 00:27:19.092 EOF 00:27:19.092 )") 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:19.092 { 00:27:19.092 "params": { 00:27:19.092 "name": "Nvme$subsystem", 00:27:19.092 "trtype": "$TEST_TRANSPORT", 00:27:19.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.092 "adrfam": "ipv4", 00:27:19.092 "trsvcid": "$NVMF_PORT", 00:27:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.092 "hdgst": ${hdgst:-false}, 00:27:19.092 "ddgst": ${ddgst:-false} 00:27:19.092 }, 00:27:19.092 "method": "bdev_nvme_attach_controller" 00:27:19.092 } 00:27:19.092 EOF 00:27:19.092 )") 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:19.092 15:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:19.092 "params": { 00:27:19.092 "name": "Nvme1", 00:27:19.092 "trtype": "tcp", 00:27:19.092 "traddr": "10.0.0.2", 00:27:19.092 "adrfam": "ipv4", 00:27:19.092 "trsvcid": "4420", 00:27:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:19.092 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:19.092 "hdgst": false, 00:27:19.092 "ddgst": false 00:27:19.092 }, 00:27:19.092 "method": "bdev_nvme_attach_controller" 00:27:19.092 },{ 00:27:19.092 "params": { 00:27:19.092 "name": "Nvme2", 00:27:19.092 "trtype": "tcp", 00:27:19.092 "traddr": "10.0.0.2", 00:27:19.092 "adrfam": "ipv4", 00:27:19.092 "trsvcid": "4420", 00:27:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:19.092 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:19.092 "hdgst": false, 00:27:19.092 "ddgst": false 00:27:19.092 }, 00:27:19.092 "method": "bdev_nvme_attach_controller" 00:27:19.092 },{ 00:27:19.092 "params": { 00:27:19.092 "name": "Nvme3", 00:27:19.092 "trtype": "tcp", 00:27:19.092 "traddr": "10.0.0.2", 00:27:19.092 "adrfam": "ipv4", 00:27:19.092 "trsvcid": "4420", 00:27:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:19.092 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:19.092 "hdgst": false, 00:27:19.092 "ddgst": false 00:27:19.092 }, 00:27:19.092 "method": "bdev_nvme_attach_controller" 00:27:19.092 },{ 00:27:19.092 "params": { 00:27:19.092 "name": "Nvme4", 00:27:19.092 "trtype": "tcp", 00:27:19.092 "traddr": "10.0.0.2", 00:27:19.092 "adrfam": "ipv4", 00:27:19.092 "trsvcid": "4420", 00:27:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:19.092 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:19.092 "hdgst": false, 00:27:19.092 "ddgst": false 00:27:19.092 }, 00:27:19.092 "method": "bdev_nvme_attach_controller" 00:27:19.092 },{ 00:27:19.092 "params": { 00:27:19.092 "name": "Nvme5", 00:27:19.092 "trtype": "tcp", 00:27:19.092 "traddr": "10.0.0.2", 00:27:19.092 "adrfam": "ipv4", 00:27:19.092 "trsvcid": "4420", 00:27:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:19.092 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:19.092 "hdgst": false, 00:27:19.092 "ddgst": false 00:27:19.092 }, 00:27:19.092 "method": "bdev_nvme_attach_controller" 00:27:19.092 },{ 00:27:19.092 "params": { 00:27:19.092 "name": "Nvme6", 00:27:19.092 "trtype": "tcp", 00:27:19.092 "traddr": "10.0.0.2", 00:27:19.092 "adrfam": "ipv4", 00:27:19.092 "trsvcid": "4420", 00:27:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:19.092 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:19.092 "hdgst": false, 00:27:19.092 "ddgst": false 00:27:19.092 }, 00:27:19.092 "method": "bdev_nvme_attach_controller" 00:27:19.092 },{ 00:27:19.092 "params": { 00:27:19.092 "name": "Nvme7", 00:27:19.092 "trtype": "tcp", 00:27:19.092 "traddr": "10.0.0.2", 00:27:19.092 "adrfam": "ipv4", 00:27:19.092 "trsvcid": "4420", 00:27:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:19.092 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:19.092 "hdgst": false, 00:27:19.092 "ddgst": false 00:27:19.092 }, 00:27:19.092 "method": "bdev_nvme_attach_controller" 00:27:19.092 },{ 00:27:19.092 "params": { 00:27:19.092 "name": "Nvme8", 00:27:19.092 "trtype": "tcp", 00:27:19.092 "traddr": "10.0.0.2", 00:27:19.092 "adrfam": "ipv4", 00:27:19.092 "trsvcid": "4420", 00:27:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:19.092 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:19.092 "hdgst": false, 
00:27:19.092 "ddgst": false 00:27:19.092 }, 00:27:19.092 "method": "bdev_nvme_attach_controller" 00:27:19.092 },{ 00:27:19.092 "params": { 00:27:19.092 "name": "Nvme9", 00:27:19.092 "trtype": "tcp", 00:27:19.092 "traddr": "10.0.0.2", 00:27:19.092 "adrfam": "ipv4", 00:27:19.092 "trsvcid": "4420", 00:27:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:19.092 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:19.092 "hdgst": false, 00:27:19.092 "ddgst": false 00:27:19.092 }, 00:27:19.092 "method": "bdev_nvme_attach_controller" 00:27:19.092 },{ 00:27:19.092 "params": { 00:27:19.092 "name": "Nvme10", 00:27:19.092 "trtype": "tcp", 00:27:19.092 "traddr": "10.0.0.2", 00:27:19.092 "adrfam": "ipv4", 00:27:19.092 "trsvcid": "4420", 00:27:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:19.092 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:19.092 "hdgst": false, 00:27:19.092 "ddgst": false 00:27:19.092 }, 00:27:19.092 "method": "bdev_nvme_attach_controller" 00:27:19.092 }' 00:27:19.092 [2024-07-13 15:38:49.673427] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:19.092 [2024-07-13 15:38:49.673504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190270 ] 00:27:19.092 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.092 [2024-07-13 15:38:49.709792] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:19.092 [2024-07-13 15:38:49.739252] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.092 [2024-07-13 15:38:49.825936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.992 Running I/O for 10 seconds... 
00:27:20.992 15:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:20.992 15:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:27:20.992 15:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:20.992 15:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.992 15:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:20.992 15:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.992 15:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:20.992 15:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:20.992 15:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:20.992 15:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:20.992 15:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:20.992 15:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:20.992 15:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:20.992 15:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:20.992 15:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:20.992 15:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:20.992 15:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.992 15:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:20.992 15:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.992 15:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:20.992 15:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:20.992 15:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:21.249 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:21.249 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:21.249 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:21.249 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.249 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:21.249 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:21.505 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.505 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:27:21.505 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:21.505 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:21.775 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:21.775 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:21.775 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:21.775 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:21.775 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.775 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:21.775 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.775 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=137 00:27:21.775 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 137 -ge 100 ']' 00:27:21.775 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:21.775 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:21.775 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:21.775 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1190091 00:27:21.775 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1190091 ']' 00:27:21.775 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1190091 00:27:21.775 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:27:21.775 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:21.775 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1190091 00:27:21.775 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:21.775 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:21.775 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1190091' 00:27:21.775 killing process with pid 1190091 00:27:21.776 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1190091 00:27:21.776 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1190091 00:27:21.776 [2024-07-13 15:38:52.369786] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.369952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.369970] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.369984] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.369996] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370032] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370044] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370056] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370068] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370080] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370092] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370116] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370129] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370153] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370166] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370179] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370203] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370215] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370227] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370262] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370362] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370411] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370446] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370506] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370518] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the 
state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370553] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370564] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370588] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.370611] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140aaa0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.371778] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d4a0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.371810] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d4a0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.371824] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d4a0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.371836] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d4a0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.374225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.374274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.374291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.374304] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.374316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.374329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.374341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.776 [2024-07-13 15:38:52.374353] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374365] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374377] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374389] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374438] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374450] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374463] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374487] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374499] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374511] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374536] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374560] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374572] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374584] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374596] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374612] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 
15:38:52.374650] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374662] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374675] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374698] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374710] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374722] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374735] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374747] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374759] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374771] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374808] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374832] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374844] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374876] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374890] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374903] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same 
with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374936] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374949] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.374989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.375001] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.375014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.375026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.375038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.375050] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b3e0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.376233] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.376269] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.376285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.376297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.376310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.376322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.376335] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.376347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.376359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.376371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.376383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.777 [2024-07-13 15:38:52.376396] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376420] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376433] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376444] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376480] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376499] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376512] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376537] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376550] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376574] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376586] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376599] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376612] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376648] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the 
state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376673] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376698] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376710] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376722] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376735] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376747] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376759] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376771] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376795] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376808] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376836] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376849] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376862] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376882] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376895] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376907] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376920] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376939] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.376989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.377002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.377015] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.377027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.377039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.377051] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140b8a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.377791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140bd40 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.377817] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140bd40 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.379466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.778 [2024-07-13 15:38:52.379511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.778 [2024-07-13 15:38:52.379516] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.379542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.379545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.778 [2024-07-13 15:38:52.379556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.379562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.778 [2024-07-13 15:38:52.379569] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.379579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.778 [2024-07-13 15:38:52.379587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.379594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.778 [2024-07-13 15:38:52.379600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.778 [2024-07-13 15:38:52.379611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.778 [2024-07-13 15:38:52.379613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.379626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-13 15:38:52.379626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.779 the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.379641] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.379643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.779 [2024-07-13 15:38:52.379654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.379658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.779 [2024-07-13 15:38:52.379666] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.379674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.779 [2024-07-13 15:38:52.379679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.379689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.779 [2024-07-13 15:38:52.379691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.379704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with [2024-07-13 15:38:52.379705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:1the state(5) to be set 00:27:21.779 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.779 [2024-07-13 15:38:52.379718] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with [2024-07-13 15:38:52.379720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:21.779 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.779 [2024-07-13 15:38:52.379734] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.379737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.779 [2024-07-13 15:38:52.379747] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.379751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.779 [2024-07-13 15:38:52.379759] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.379767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.779 [2024-07-13 15:38:52.379775] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.379781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.779 [2024-07-13 15:38:52.379788] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.379797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.779 [2024-07-13 15:38:52.379802] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.379812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.779 [2024-07-13 15:38:52.379815] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.379827] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with [2024-07-13 15:38:52.379827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:1the state(5) to be set 00:27:21.779 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.779 [2024-07-13 15:38:52.379842] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.379844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.779 [2024-07-13 15:38:52.379855] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.379880] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.379895] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.379861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.779 [2024-07-13 15:38:52.379907] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.379927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with [2024-07-13 15:38:52.379927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cthe state(5) to be set 00:27:21.779 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.779 [2024-07-13 15:38:52.379941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.379946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.779 [2024-07-13 15:38:52.379954] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.379960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.779 [2024-07-13 15:38:52.379967] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.379976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.779 [2024-07-13 15:38:52.379979] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.379990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.779 [2024-07-13 15:38:52.379996] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.380007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:1[2024-07-13 15:38:52.380009] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.779 the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.380022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-13 15:38:52.380023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.779 the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.380038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.380040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.779 [2024-07-13 15:38:52.380051] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.380054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.779 [2024-07-13 15:38:52.380063] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.380071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.779 [2024-07-13 15:38:52.380076] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.380085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.779 [2024-07-13 15:38:52.380089] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.380101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:1[2024-07-13 15:38:52.380102] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.779 the state(5) to be set 00:27:21.779 [2024-07-13 15:38:52.380116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-13 15:38:52.380117] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.779 the state(5) to be set 00:27:21.780 [2024-07-13 15:38:52.380131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.780 [2024-07-13 15:38:52.380133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.780 [2024-07-13 15:38:52.380144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.780 [2024-07-13 15:38:52.380148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.780 [2024-07-13 15:38:52.380157] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.780 [2024-07-13 15:38:52.380164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.780 [2024-07-13 15:38:52.380170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.780 [2024-07-13 15:38:52.380178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.780 [2024-07-13 15:38:52.380195] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.780 [2024-07-13 15:38:52.380197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.780 [2024-07-13 15:38:52.380208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.780 [2024-07-13 15:38:52.380213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.780 [2024-07-13 15:38:52.380221] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.780 [2024-07-13 15:38:52.380229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.780 [2024-07-13 15:38:52.380233] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.780 [2024-07-13 
15:38:52.380243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.780 [2024-07-13 15:38:52.380246] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.780 [2024-07-13 15:38:52.380259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.780 [2024-07-13 15:38:52.380259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.780 [2024-07-13 15:38:52.380271] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.780 [2024-07-13 15:38:52.380274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.780 [2024-07-13 15:38:52.380284] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.780 [2024-07-13 15:38:52.380290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.780 [2024-07-13 15:38:52.380297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.780 [2024-07-13 15:38:52.380304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.780 [2024-07-13 15:38:52.380309] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.780 [2024-07-13 15:38:52.380320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:1[2024-07-13 15:38:52.380322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.780 the state(5) to be set 00:27:21.780 [2024-07-13 15:38:52.380336] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with [2024-07-13 15:38:52.380336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:21.780 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.780 [2024-07-13 15:38:52.380350] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.780 [2024-07-13 15:38:52.380354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.780 [2024-07-13 15:38:52.380363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.780 [2024-07-13 15:38:52.380368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.780 [2024-07-13 15:38:52.380379] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.780 [2024-07-13 15:38:52.380385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:21.780 [2024-07-13 15:38:52.380392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c6a0 is same with the state(5) to be set 00:27:21.780 [2024-07-13 15:38:52.380400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.780 [2024-07-13 15:38:52.380416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.780 [2024-07-13 15:38:52.380430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.780 [2024-07-13 15:38:52.380445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.780 [2024-07-13 15:38:52.380459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.780 [2024-07-13 15:38:52.380474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.780 [2024-07-13 15:38:52.380488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.780 [2024-07-13 15:38:52.380503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.780 [2024-07-13 15:38:52.380517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.780 [2024-07-13 15:38:52.380532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.780 [2024-07-13 15:38:52.380546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.780 [2024-07-13 15:38:52.380561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.780 [2024-07-13 15:38:52.380575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.780 [2024-07-13 15:38:52.380591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.780 [2024-07-13 15:38:52.380605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.780 [2024-07-13 15:38:52.380620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.780 [2024-07-13 15:38:52.380634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.780 [2024-07-13 15:38:52.380650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.780 [2024-07-13 15:38:52.380663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.780 [2024-07-13 15:38:52.380679] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.780 [2024-07-13 15:38:52.380693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.780 [2024-07-13 15:38:52.380709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.780 [2024-07-13 15:38:52.380731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.780 [2024-07-13 15:38:52.380748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.780 [2024-07-13 15:38:52.380762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.780 [2024-07-13 15:38:52.380778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.780 [2024-07-13 15:38:52.380792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.780 [2024-07-13 15:38:52.380807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.780 [2024-07-13 15:38:52.380821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 [2024-07-13 15:38:52.380836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.781 [2024-07-13 15:38:52.380850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 [2024-07-13 15:38:52.380872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.781 [2024-07-13 15:38:52.380889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 [2024-07-13 15:38:52.380904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.781 [2024-07-13 15:38:52.380929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 [2024-07-13 15:38:52.380945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.781 [2024-07-13 15:38:52.380958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 [2024-07-13 15:38:52.380974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.781 [2024-07-13 15:38:52.380988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 [2024-07-13 15:38:52.381003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.781 [2024-07-13 15:38:52.381017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 [2024-07-13 15:38:52.381033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.781 [2024-07-13 15:38:52.381046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 [2024-07-13 15:38:52.381061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.781 [2024-07-13 15:38:52.381075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 [2024-07-13 15:38:52.381090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.781 [2024-07-13 15:38:52.381103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 [2024-07-13 15:38:52.381123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.781 [2024-07-13 15:38:52.381137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 [2024-07-13 15:38:52.381152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.781 [2024-07-13 15:38:52.381166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 [2024-07-13 15:38:52.381186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.781 [2024-07-13 15:38:52.381200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 [2024-07-13 15:38:52.381215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.781 [2024-07-13 15:38:52.381230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 [2024-07-13 15:38:52.381245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.781 [2024-07-13 15:38:52.381259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 [2024-07-13 15:38:52.381275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.781 [2024-07-13 15:38:52.381289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 [2024-07-13 15:38:52.381305] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.781 [2024-07-13 15:38:52.381318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 [2024-07-13 15:38:52.381334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.781 [2024-07-13 15:38:52.381347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 [2024-07-13 15:38:52.381362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.781 [2024-07-13 15:38:52.381376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 [2024-07-13 15:38:52.381391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.781 [2024-07-13 15:38:52.381405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 [2024-07-13 15:38:52.381420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.781 [2024-07-13 15:38:52.381433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-13 15:38:52.381426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 the state(5) to be set 00:27:21.781 [2024-07-13 15:38:52.381452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:12[2024-07-13 15:38:52.381453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.781 the state(5) to be set 00:27:21.781 [2024-07-13 15:38:52.381472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 [2024-07-13 15:38:52.381473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.781 [2024-07-13 15:38:52.381487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:12[2024-07-13 15:38:52.381487] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.781 the state(5) to be set 00:27:21.781 [2024-07-13 15:38:52.381503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-13 15:38:52.381503] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 the state(5) to be set 00:27:21.781 [2024-07-13 15:38:52.381518] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.781 [2024-07-13 15:38:52.381519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.781 [2024-07-13 15:38:52.381530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.781 [2024-07-13 15:38:52.381534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.781 [2024-07-13 15:38:52.381542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.781 [2024-07-13 15:38:52.381555] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.781 [2024-07-13 15:38:52.381567] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.781 [2024-07-13 15:38:52.381579] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.782 [2024-07-13 15:38:52.381592] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381604] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381616] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381640] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381652] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381664] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381672] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17e9380 was disconnected and fr[2024-07-13 15:38:52.381676] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with eed. reset controller. 
00:27:21.782 the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381702] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381717] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381730] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381742] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381755] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381779] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381790] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381802] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381814] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381826] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381838] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381849] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381862] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381887] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381901] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381913] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381935] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381947] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381971] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.381995] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.382007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.382019] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.382031] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.382042] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.382054] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.382066] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.382082] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.382095] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.382107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.382119] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.382131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.382143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.382155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.382168] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.382183] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.382196] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.382208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.382220] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.382232] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with the 
state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.382234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:12[2024-07-13 15:38:52.382244] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cb40 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.782 the state(5) to be set 00:27:21.782 [2024-07-13 15:38:52.382259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.782 [2024-07-13 15:38:52.382282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.782 [2024-07-13 15:38:52.382297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.782 [2024-07-13 15:38:52.382313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.782 [2024-07-13 15:38:52.382326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.782 [2024-07-13 15:38:52.382342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.782 [2024-07-13 15:38:52.382356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.782 [2024-07-13 15:38:52.382371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.782 [2024-07-13 15:38:52.382384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.782 [2024-07-13 15:38:52.382399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.782 [2024-07-13 15:38:52.382413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.782 [2024-07-13 15:38:52.382434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.782 [2024-07-13 15:38:52.382448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.782 [2024-07-13 15:38:52.382463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.782 [2024-07-13 15:38:52.382476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.782 [2024-07-13 15:38:52.382492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.782 [2024-07-13 15:38:52.382505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.782 [2024-07-13 15:38:52.382520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 
[2024-07-13 15:38:52.382534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 [2024-07-13 15:38:52.382549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 [2024-07-13 15:38:52.382563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 [2024-07-13 15:38:52.382578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 [2024-07-13 15:38:52.382591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 [2024-07-13 15:38:52.382606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 [2024-07-13 15:38:52.382619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 [2024-07-13 15:38:52.382634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 [2024-07-13 15:38:52.382648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 [2024-07-13 15:38:52.382663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 [2024-07-13 15:38:52.382676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 [2024-07-13 15:38:52.382691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 [2024-07-13 15:38:52.382704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 [2024-07-13 15:38:52.382718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 [2024-07-13 15:38:52.382732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 [2024-07-13 15:38:52.382747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 [2024-07-13 15:38:52.382760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 [2024-07-13 15:38:52.382774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 [2024-07-13 15:38:52.382791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 [2024-07-13 15:38:52.382806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 [2024-07-13 
15:38:52.382821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 [2024-07-13 15:38:52.382836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 [2024-07-13 15:38:52.382850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 [2024-07-13 15:38:52.382872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 [2024-07-13 15:38:52.382888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 [2024-07-13 15:38:52.382903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 [2024-07-13 15:38:52.382917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 [2024-07-13 15:38:52.382939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 [2024-07-13 15:38:52.382953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 [2024-07-13 15:38:52.382968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 [2024-07-13 15:38:52.382982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-13 15:38:52.382974] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 the state(5) to be set 00:27:21.783 [2024-07-13 15:38:52.383000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:1[2024-07-13 15:38:52.383001] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 the state(5) to be set 00:27:21.783 [2024-07-13 15:38:52.383016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-13 15:38:52.383016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 the state(5) to be set 00:27:21.783 [2024-07-13 15:38:52.383031] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.783 [2024-07-13 15:38:52.383033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 [2024-07-13 15:38:52.383043] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.783 [2024-07-13 15:38:52.383047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 [2024-07-13 15:38:52.383056] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.783 [2024-07-13 15:38:52.383062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 [2024-07-13 15:38:52.383068] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.783 [2024-07-13 15:38:52.383077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 [2024-07-13 15:38:52.383086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.783 [2024-07-13 15:38:52.383092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 [2024-07-13 15:38:52.383100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.783 [2024-07-13 15:38:52.383106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 [2024-07-13 15:38:52.383112] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.783 [2024-07-13 15:38:52.383121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 [2024-07-13 15:38:52.383125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.783 [2024-07-13 15:38:52.383135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-13 15:38:52.383137] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 the state(5) to be set 00:27:21.783 [2024-07-13 15:38:52.383150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.783 [2024-07-13 15:38:52.383152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 [2024-07-13 15:38:52.383162] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.783 [2024-07-13 15:38:52.383166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 [2024-07-13 15:38:52.383175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.783 [2024-07-13 15:38:52.383189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with [2024-07-13 15:38:52.383190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:1the state(5) to be set 00:27:21.783 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 [2024-07-13 15:38:52.383203] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.783 [2024-07-13 15:38:52.383205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 [2024-07-13 15:38:52.383216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.783 [2024-07-13 15:38:52.383221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 [2024-07-13 15:38:52.383228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.783 [2024-07-13 15:38:52.383235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 [2024-07-13 15:38:52.383241] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.783 [2024-07-13 15:38:52.383251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.783 [2024-07-13 15:38:52.383253] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.783 [2024-07-13 15:38:52.383265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.783 [2024-07-13 15:38:52.383269] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.784 [2024-07-13 15:38:52.383282] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383295] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.784 [2024-07-13 15:38:52.383309] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.784 [2024-07-13 15:38:52.383322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.784 [2024-07-13 15:38:52.383334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.784 [2024-07-13 15:38:52.383347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383357] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.784 [2024-07-13 15:38:52.383360] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.784 [2024-07-13 15:38:52.383384] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.784 [2024-07-13 15:38:52.383397] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.784 [2024-07-13 15:38:52.383409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.784 [2024-07-13 15:38:52.383422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.784 [2024-07-13 15:38:52.383436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.784 [2024-07-13 15:38:52.383453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.784 [2024-07-13 15:38:52.383466] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.784 [2024-07-13 15:38:52.383481] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.784 [2024-07-13
15:38:52.383509] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.784 [2024-07-13 15:38:52.383522] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.784 [2024-07-13 15:38:52.383534] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.784 [2024-07-13 15:38:52.383547] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.784 [2024-07-13 15:38:52.383560] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383574] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.784 [2024-07-13 15:38:52.383588] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.784 [2024-07-13 15:38:52.383601] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.784 [2024-07-13 15:38:52.383613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.784 [2024-07-13 15:38:52.383626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.784 [2024-07-13 15:38:52.383638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13
15:38:52.383657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.784 [2024-07-13 15:38:52.383666] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.784 [2024-07-13 15:38:52.383679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.784 [2024-07-13 15:38:52.383691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.784 [2024-07-13 15:38:52.383704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383718] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140cfe0 is same with the state(5) to be set 00:27:21.784 [2024-07-13 15:38:52.383720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.784 [2024-07-13 15:38:52.383734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.784 [2024-07-13 15:38:52.383750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.784 [2024-07-13 15:38:52.383764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.784 [2024-07-13 15:38:52.383779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.784 [2024-07-13 15:38:52.383792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.784 [2024-07-13 15:38:52.383807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.784 [2024-07-13 15:38:52.383821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.784 [2024-07-13 15:38:52.383837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.784 [2024-07-13 15:38:52.383850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.383879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.785 [2024-07-13 15:38:52.383895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.383911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.785 [2024-07-13 15:38:52.383925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.383944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.785 [2024-07-13 15:38:52.383958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.383973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.785 [2024-07-13 15:38:52.383987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.384001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.785 [2024-07-13 15:38:52.384015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.384030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.785 [2024-07-13 15:38:52.384043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.384058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.785 [2024-07-13 15:38:52.384071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.384086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.785 [2024-07-13 15:38:52.384100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.384115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.785 [2024-07-13 15:38:52.384128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.384143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.785 [2024-07-13 15:38:52.384157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.384172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.785 [2024-07-13 15:38:52.384186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.384215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:21.785 [2024-07-13 15:38:52.384774] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17ee420 was disconnected and freed. reset controller. 00:27:21.785 [2024-07-13 15:38:52.384891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.785 [2024-07-13 15:38:52.384913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.384937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.785 [2024-07-13 15:38:52.384950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.384963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.785 [2024-07-13 15:38:52.384981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.384995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.785 [2024-07-13 15:38:52.385008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.385021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1882140 is same with the state(5) to be set 00:27:21.785 [2024-07-13 15:38:52.385068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.785 [2024-07-13 15:38:52.385088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.385103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.785 [2024-07-13 15:38:52.385116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.385130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.785 [2024-07-13 15:38:52.385143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.385156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.785 [2024-07-13 15:38:52.385169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.385182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16eb740 is same with the state(5) to be set 00:27:21.785 [2024-07-13 15:38:52.385227] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.785 [2024-07-13 15:38:52.385253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.385267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.785 [2024-07-13 15:38:52.385281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.385294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.785 [2024-07-13 15:38:52.385307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.385321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.785 [2024-07-13 15:38:52.385333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.385345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f1010 is same with the state(5) to be set 00:27:21.785 [2024-07-13 15:38:52.385390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.785 [2024-07-13 15:38:52.385410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.385424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.785 [2024-07-13 15:38:52.385437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.385455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.785 [2024-07-13 15:38:52.385469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.385483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.785 [2024-07-13 15:38:52.385496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.385508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4610 is same with the state(5) to be set 00:27:21.785 [2024-07-13 15:38:52.385547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.785 [2024-07-13 15:38:52.385566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.385581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.785 [2024-07-13 15:38:52.385594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.385608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.785 [2024-07-13 15:38:52.385621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.385635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.785 [2024-07-13 15:38:52.385648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.785 [2024-07-13 15:38:52.385660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878600 is same with the state(5) to be set 00:27:21.785 [2024-07-13 15:38:52.385707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.786 [2024-07-13 15:38:52.385727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.786 [2024-07-13 15:38:52.385741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.786 [2024-07-13 15:38:52.385754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.786 [2024-07-13 15:38:52.385768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.786 [2024-07-13 15:38:52.385781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.786 [2024-07-13 15:38:52.385795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.786 [2024-07-13 15:38:52.385807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.786 [2024-07-13 15:38:52.385819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1879e80 is same with the state(5) to be set 00:27:21.786 [2024-07-13 15:38:52.385857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.786 [2024-07-13 15:38:52.385884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.786 [2024-07-13 15:38:52.385904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.786 [2024-07-13 15:38:52.385918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.786 [2024-07-13 15:38:52.385940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.786 [2024-07-13 15:38:52.385952] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.786 [2024-07-13 15:38:52.385966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.786 [2024-07-13 15:38:52.385979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.786 [2024-07-13 15:38:52.385991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1879320 is same with the state(5) to be set 00:27:21.786 [2024-07-13 15:38:52.386037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.786 [2024-07-13 15:38:52.386057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.786 [2024-07-13 15:38:52.386072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.786 [2024-07-13 15:38:52.386085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.786 [2024-07-13 15:38:52.386098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.787 [2024-07-13 15:38:52.386111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.787 [2024-07-13 15:38:52.386132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.787 [2024-07-13 15:38:52.386147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.787 [2024-07-13 15:38:52.386159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16aef10 is same with the state(5) to be set 00:27:21.787 [2024-07-13 15:38:52.386202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.787 [2024-07-13 15:38:52.386222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.787 [2024-07-13 15:38:52.386237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.787 [2024-07-13 15:38:52.386250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.787 [2024-07-13 15:38:52.386263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.787 [2024-07-13 15:38:52.386276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.787 [2024-07-13 15:38:52.386289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.787 [2024-07-13 15:38:52.386302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.787 [2024-07-13 15:38:52.386315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187aa90 is same with the state(5) to be set 00:27:21.787 [2024-07-13 15:38:52.386357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.787 [2024-07-13 15:38:52.386381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.787 [2024-07-13 15:38:52.386396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.787 [2024-07-13 15:38:52.386410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.787 [2024-07-13 15:38:52.386423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.787 [2024-07-13 15:38:52.386436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.787 [2024-07-13 15:38:52.386449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.787 [2024-07-13 15:38:52.386462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.787 [2024-07-13 15:38:52.386474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f7b40 is same with the state(5) to be set 00:27:21.787 [2024-07-13 15:38:52.390602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:21.787 [2024-07-13 15:38:52.390658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a4610 (9): Bad file descriptor 00:27:21.787 [2024-07-13 15:38:52.390937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.787 [2024-07-13 15:38:52.390963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.787 [2024-07-13 15:38:52.390986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.787 [2024-07-13 15:38:52.391002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.787 [2024-07-13 15:38:52.391019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.787 [2024-07-13 15:38:52.391033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.787 [2024-07-13 15:38:52.391049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.787 [2024-07-13 15:38:52.391063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.787 [2024-07-13 15:38:52.391078] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.787 [2024-07-13 15:38:52.391092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.787 [2024-07-13 15:38:52.391108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.787 [2024-07-13 15:38:52.391122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.787 [2024-07-13 15:38:52.391137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.787 [2024-07-13 15:38:52.391152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.787 [2024-07-13 15:38:52.391173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.787 [2024-07-13 15:38:52.391192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.787 [2024-07-13 15:38:52.391209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.787 [2024-07-13 15:38:52.391223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.787 [2024-07-13 15:38:52.391238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.787 [2024-07-13 15:38:52.391251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.787 [2024-07-13 15:38:52.391266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.787 [2024-07-13 15:38:52.391280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.787 [2024-07-13 15:38:52.391295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.787 [2024-07-13 15:38:52.391309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.787 [2024-07-13 15:38:52.391324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.787 [2024-07-13 15:38:52.391338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.787 [2024-07-13 15:38:52.391353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.391366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.391382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.391395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.391411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.391424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.391439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.391453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.391468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.391481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.391496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.391510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.391525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.391538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.391557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.391571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.391587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.391600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.391615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.391632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.391648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.391662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.391678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.391692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.391708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.391722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.391737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.391751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.391767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.391781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.391797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.391811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.391827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.391841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.391857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.391878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.391895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.391910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.391932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.391950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.391967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.391981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.391997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.392010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.392026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.392039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.392055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.392069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.392085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.392099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.392114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.392128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.392143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.392157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.392181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.392194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.392210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.392224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.392240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.392255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.392271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.392285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.392300] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.392314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.392330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.788 [2024-07-13 15:38:52.392348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.788 [2024-07-13 15:38:52.392364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.392378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.392393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.392407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.392423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.392437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.392453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.392467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.392482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.392506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.392523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.392537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.392552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.392566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.392582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.392596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.392611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.392624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.392640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.392653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.392669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.392683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.392698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.392712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.392731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.392745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.392760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.392774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.392789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.392803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.392819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.392832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.392848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.392861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.392884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.392898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.392999] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17e7350 was disconnected and freed. reset controller. 00:27:21.789 [2024-07-13 15:38:52.393171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.393208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.393231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.393246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.393261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.393276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.393291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.393305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.393321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.393335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.393350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.393364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.393384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.393398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.393414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.393428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.393443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.393457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.393472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.393485] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.393500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.393514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.393529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.393543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.393558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.393571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.393586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.393599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.393615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.393628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.789 [2024-07-13 15:38:52.393643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.789 [2024-07-13 15:38:52.393656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.393671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.393686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.393701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.393715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.393729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.393746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.393762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.393776] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.393792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.393805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.393821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.393835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.393850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.393864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.393888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.393902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.393928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.393942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.393957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.393970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.393986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.393999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.394014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.394028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.394043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.394057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.394072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.394086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.394101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.394115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.394133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.394148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.394163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.394183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.394199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.394212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.394227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.394241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.394256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.394270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.394285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.394299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.394314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.394327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.394343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.394356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.394371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.394385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.394400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.394413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.394428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.394442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.394457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.394470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.394485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.394506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.394522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.394536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.394551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.394565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.394580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.394594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.394609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.394629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.394645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.790 [2024-07-13 15:38:52.394659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.790 [2024-07-13 15:38:52.394675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.394688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.394704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.394718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.394733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.394746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.394762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.394776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.394791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.394805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.394820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.394834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.394849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.394862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.394893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.394914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.394929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.394942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.394958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.394971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.394986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.394999] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.395014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.395028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.395043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.395056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.395071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.395084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.395100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.395118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.395217] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16aaab0 was disconnected and freed. reset controller. 00:27:21.791 [2024-07-13 15:38:52.395481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:21.791 [2024-07-13 15:38:52.395532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1879e80 (9): Bad file descriptor 00:27:21.791 [2024-07-13 15:38:52.395597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1882140 (9): Bad file descriptor 00:27:21.791 [2024-07-13 15:38:52.395631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16eb740 (9): Bad file descriptor 00:27:21.791 [2024-07-13 15:38:52.395662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f1010 (9): Bad file descriptor 00:27:21.791 [2024-07-13 15:38:52.395689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1878600 (9): Bad file descriptor 00:27:21.791 [2024-07-13 15:38:52.395722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1879320 (9): Bad file descriptor 00:27:21.791 [2024-07-13 15:38:52.395759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16aef10 (9): Bad file descriptor 00:27:21.791 [2024-07-13 15:38:52.395789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187aa90 (9): Bad file descriptor 00:27:21.791 [2024-07-13 15:38:52.395818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f7b40 (9): Bad file descriptor 00:27:21.791 [2024-07-13 15:38:52.399265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:21.791 [2024-07-13 15:38:52.399304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] 
resetting controller 00:27:21.791 [2024-07-13 15:38:52.399523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.791 [2024-07-13 15:38:52.399554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a4610 with addr=10.0.0.2, port=4420 00:27:21.791 [2024-07-13 15:38:52.399572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4610 is same with the state(5) to be set 00:27:21.791 [2024-07-13 15:38:52.399630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.399652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.399674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.399690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.399706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.399720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.399735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.399749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.399764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.399778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.399793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.399807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.399823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.399837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.399852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.399875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.399894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.399917] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.399933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.399946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.399967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.399982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.399997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.791 [2024-07-13 15:38:52.400011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.791 [2024-07-13 15:38:52.400027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400513] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.792 [2024-07-13 15:38:52.400794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.792 [2024-07-13 15:38:52.400808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.400823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.400837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.400852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.400871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.400888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.400903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.400920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.400934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.400949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.400962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.400978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.400991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.401007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.401021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.401036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.401049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.401065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.401082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.401098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.401111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.401127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.401141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.401156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.401179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.401194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.401208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.401224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.401238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.401253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.401267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.401282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.401296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.401311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.401325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.401340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.401353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.401369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.401383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.401398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.401411] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.401426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.401439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.401459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.401473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.401489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.401503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.401518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.401532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.401547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.793 [2024-07-13 15:38:52.401561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.793 [2024-07-13 15:38:52.401677] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1768940 was disconnected and freed. reset controller. 
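Editor's note (not part of the captured log): every aborted entry above ends with the status pair "(00/08)" that spdk_nvme_print_completion() prints as "(sct/sc)". Per the NVMe base specification, Status Code Type 0x0 is Generic Command Status and Status Code 0x08 is "Command Aborted due to SQ Deletion", which is the expected outcome when the test resets a controller and deletes submission queue qid:1 while WRITE/READ commands are still outstanding. A minimal standalone sketch of decoding that pair follows; the macro names are local to the example (they mirror, but are not, SPDK's nvme_spec.h definitions).

/*
 * Minimal sketch (not SPDK code): decode the "(sct/sc)" pair printed by
 * spdk_nvme_print_completion(), e.g. "(00/08)" in the log above.
 * Numeric values come from the NVMe base specification; the names are
 * defined locally for illustration only.
 */
#include <stdint.h>
#include <stdio.h>

#define NVME_SCT_GENERIC            0x0   /* Generic Command Status */
#define NVME_SC_ABORTED_SQ_DELETION 0x08  /* Command Aborted due to SQ Deletion */

static const char *decode_status(uint8_t sct, uint8_t sc)
{
    if (sct == NVME_SCT_GENERIC && sc == NVME_SC_ABORTED_SQ_DELETION) {
        /* Expected here: qid:1 is deleted during reset while I/O is in flight. */
        return "ABORTED - SQ DELETION";
    }
    return "other status (see NVMe spec, Generic Command Status values)";
}

int main(void)
{
    /* "(00/08)" from the log: sct = 0x00, sc = 0x08. */
    printf("%s\n", decode_status(0x00, 0x08));
    return 0;
}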
00:27:21.793 [2024-07-13 15:38:52.401931] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:21.793 [2024-07-13 15:38:52.402236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.793 [2024-07-13 15:38:52.402264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1879e80 with addr=10.0.0.2, port=4420 00:27:21.793 [2024-07-13 15:38:52.402281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1879e80 is same with the state(5) to be set 00:27:21.793 [2024-07-13 15:38:52.402406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.793 [2024-07-13 15:38:52.402431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187aa90 with addr=10.0.0.2, port=4420 00:27:21.793 [2024-07-13 15:38:52.402446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187aa90 is same with the state(5) to be set 00:27:21.793 [2024-07-13 15:38:52.402590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.793 [2024-07-13 15:38:52.402615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f1010 with addr=10.0.0.2, port=4420 00:27:21.793 [2024-07-13 15:38:52.402629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f1010 is same with the state(5) to be set 00:27:21.793 [2024-07-13 15:38:52.402652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a4610 (9): Bad file descriptor 00:27:21.793 [2024-07-13 15:38:52.404135] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:21.793 [2024-07-13 15:38:52.404221] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:21.793 [2024-07-13 15:38:52.404575] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:21.793 [2024-07-13 15:38:52.404647] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:21.793 [2024-07-13 15:38:52.404684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:21.793 [2024-07-13 15:38:52.404725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1879e80 (9): Bad file descriptor 00:27:21.793 [2024-07-13 15:38:52.404749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187aa90 (9): Bad file descriptor 00:27:21.793 [2024-07-13 15:38:52.404766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f1010 (9): Bad file descriptor 00:27:21.793 [2024-07-13 15:38:52.404783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:21.794 [2024-07-13 15:38:52.404802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:21.794 [2024-07-13 15:38:52.404819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:21.794 [2024-07-13 15:38:52.404961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:21.794 [2024-07-13 15:38:52.405145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.794 [2024-07-13 15:38:52.405173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16aef10 with addr=10.0.0.2, port=4420 00:27:21.794 [2024-07-13 15:38:52.405192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16aef10 is same with the state(5) to be set 00:27:21.794 [2024-07-13 15:38:52.405206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:21.794 [2024-07-13 15:38:52.405218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:21.794 [2024-07-13 15:38:52.405231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:21.794 [2024-07-13 15:38:52.405251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:21.794 [2024-07-13 15:38:52.405265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:21.794 [2024-07-13 15:38:52.405277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:21.794 [2024-07-13 15:38:52.405295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:21.794 [2024-07-13 15:38:52.405309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:21.794 [2024-07-13 15:38:52.405321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:21.794 [2024-07-13 15:38:52.405638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:21.794 [2024-07-13 15:38:52.405659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:21.794 [2024-07-13 15:38:52.405672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:21.794 [2024-07-13 15:38:52.405688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16aef10 (9): Bad file descriptor 00:27:21.794 [2024-07-13 15:38:52.405812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:21.794 [2024-07-13 15:38:52.405834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:21.794 [2024-07-13 15:38:52.405848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
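Editor's note (not part of the captured log): the reset sequence above shows two Linux errno values. posix_sock_create reports "connect() failed, errno = 111", i.e. ECONNREFUSED, because nothing is accepting on 10.0.0.2:4420 while the target side is being torn down, so nvme_tcp_qpair_connect_sock cannot re-establish the qpair and bdev_nvme records "controller reinitialization failed" / "Resetting controller failed". The "Failed to flush tqpair=... (9): Bad file descriptor" lines report errno 9 (EBADF) for qpairs whose sockets were already closed. The sketch below is a generic, SPDK-independent illustration of those two errno values under the stated assumption that the host is reachable but no listener answers on the port.

/*
 * Minimal sketch (independent of SPDK): show the errno values seen above.
 * Assumption: 10.0.0.2 is reachable but nothing listens on port 4420, so
 * connect() fails with errno 111 (ECONNREFUSED); reusing the closed fd
 * fails with errno 9 (EBADF), matching the "Failed to flush" messages.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* Prints "errno = 111 (Connection refused)" when no listener answers. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    if (write(fd, "x", 1) < 0) {
        /* Prints "errno = 9 (Bad file descriptor)" on the closed fd. */
        printf("write() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    return 0;
}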
00:27:21.794 [2024-07-13 15:38:52.405931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.794 [2024-07-13 15:38:52.405953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.794 [2024-07-13 15:38:52.405977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.794 [2024-07-13 15:38:52.405993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.794 [2024-07-13 15:38:52.406009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.794 [2024-07-13 15:38:52.406024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.794 [2024-07-13 15:38:52.406039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.794 [2024-07-13 15:38:52.406059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.794 [2024-07-13 15:38:52.406076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.794 [2024-07-13 15:38:52.406090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.794 [2024-07-13 15:38:52.406106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.794 [2024-07-13 15:38:52.406119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.794 [2024-07-13 15:38:52.406135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.794 [2024-07-13 15:38:52.406149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.794 [2024-07-13 15:38:52.406165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.794 [2024-07-13 15:38:52.406187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.794 [2024-07-13 15:38:52.406202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.794 [2024-07-13 15:38:52.406216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.794 [2024-07-13 15:38:52.406231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.794 [2024-07-13 15:38:52.406245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.794 [2024-07-13 
15:38:52.406260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.794 [2024-07-13 15:38:52.406274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.794 [2024-07-13 15:38:52.406289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.794 [2024-07-13 15:38:52.406303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.794 [2024-07-13 15:38:52.406318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.794 [2024-07-13 15:38:52.406332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.794 [2024-07-13 15:38:52.406347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.794 [2024-07-13 15:38:52.406360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.794 [2024-07-13 15:38:52.406375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.794 [2024-07-13 15:38:52.406389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.794 [2024-07-13 15:38:52.406404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.794 [2024-07-13 15:38:52.406418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.794 [2024-07-13 15:38:52.406437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.794 [2024-07-13 15:38:52.406452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.794 [2024-07-13 15:38:52.406467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.794 [2024-07-13 15:38:52.406481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.794 [2024-07-13 15:38:52.406496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.794 [2024-07-13 15:38:52.406511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.794 [2024-07-13 15:38:52.406526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.794 [2024-07-13 15:38:52.406540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.794 [2024-07-13 15:38:52.406555] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:21.794-00:27:21.796 [2024-07-13 15:38:52.406569 - 15:38:52.407859] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 nsid:1 len:128 for cid:20 through cid:63 (lba:18944 through lba:24448, step 128), each reporting ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.796 [2024-07-13 15:38:52.407881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e87c0 is same with the state(5) to be set
00:27:21.796-00:27:21.798 [2024-07-13 15:38:52.409156 - 15:38:52.411063] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 nsid:1 len:128 for cid:0 through cid:63 (lba:24576 through lba:32640, step 128), each reporting ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.798 [2024-07-13 15:38:52.411077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a9640 is same with the state(5) to be set
00:27:21.798-00:27:21.800 [2024-07-13 15:38:52.412329 - 15:38:52.414243] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 nsid:1 len:128 for cid:0 through cid:63 (lba:16384 through lba:24448, step 128), each reporting ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.800 [2024-07-13 15:38:52.414257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ea7d0 is same with the state(5) to be set
00:27:21.800-00:27:21.801 [2024-07-13 15:38:52.415491 - 15:38:52.416580] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 nsid:1 len:128 for cid:0 through cid:36 (lba:16384 through lba:20992, step 128), each reporting ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:21.801 [2024-07-13 15:38:52.416595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.416608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.416623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.416637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.416652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.416665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.416680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.416694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.416709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.416722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.416738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.416751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.416766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.416780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.416795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.416808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.416824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.416837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.416853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.416872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.416889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:21.801 [2024-07-13 15:38:52.416903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.416918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.416935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.416950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.416964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.416979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.416993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.417008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.417021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.417036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.417050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.417066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.417080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.417095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.417109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.417124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.417138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.417153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.417166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.417182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 
15:38:52.417195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.417210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.417223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.417238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.417252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.417267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.417281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.417300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.417314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.417329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.417342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.417358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.801 [2024-07-13 15:38:52.417371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.801 [2024-07-13 15:38:52.417384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ebc80 is same with the state(5) to be set 00:27:21.801 [2024-07-13 15:38:52.418621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.418643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.418664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.418679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.418695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.418709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.418724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.418737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.418753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.418767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.418782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.418795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.418810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.418824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.418839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.418852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.418874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.418890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.418914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.418929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.418945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.418958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.418973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.418987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.419002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.419015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.419030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.419044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.419059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.419072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.419088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.419101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.419116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.419130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.419145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.419159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.419174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.419187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.419203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.419217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.419232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.419246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.419261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.419278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.419294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.419308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.419323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.419337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.419352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.419365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.419380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.419393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.419408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.419422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.419439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.419453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.802 [2024-07-13 15:38:52.419469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.802 [2024-07-13 15:38:52.419483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.419498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.419511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.419527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.419541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.419556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.419570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.419585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.419599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.419615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.419629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.419648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.419663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.419679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.419693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.419708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.419722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.419738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.419751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.419767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.419781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.419796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.419810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.419826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.419839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.419855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.419875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.419892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.419906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.419922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.419937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.419952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.419966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.419981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.419995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.420011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.420028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.420044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.420058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.420073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.420087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.420102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.420116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.420131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.420145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.420161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.420175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.420192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.420206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.420222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.420236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.420251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.420265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.420280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.420294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.420309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.803 [2024-07-13 15:38:52.420322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:21.803 [2024-07-13 15:38:52.420336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ecf80 is same with the state(5) to be set 00:27:21.803 [2024-07-13 15:38:52.421936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:21.803 [2024-07-13 15:38:52.421970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:21.803 [2024-07-13 15:38:52.421989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:21.803 [2024-07-13 15:38:52.422005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:21.803 [2024-07-13 15:38:52.422027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:21.803 [2024-07-13 15:38:52.422158] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:21.803 [2024-07-13 15:38:52.422184] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
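Aside on reading the completion dumps above: spdk_nvme_print_completion prints the NVMe status as a "(SCT/SC)" pair, so "(00/08)" is status code type 00h (generic command status) with status code 08h, Command Aborted due to SQ Deletion. That is why every READ still queued on the I/O qpairs is reported as "ABORTED - SQ DELETION" while the submission queues are torn down during the controller resets. The decoder below is only a hypothetical illustration of that notation (the helper name and the partial status table are not SPDK code):

```python
# Hypothetical helper, not SPDK code: decode the "(SCT/SC)" pair printed by
# spdk_nvme_print_completion, e.g. "(00/08)" in the records above.
GENERIC_STATUS = {
    # Status code type 0x0 = generic command status; only the codes relevant
    # to this log are listed here.
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(pair: str) -> str:
    sct, sc = (int(field, 16) for field in pair.strip("()").split("/"))
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"sct 0x{sct:x}, sc 0x{sc:02x}"

print(decode_status("(00/08)"))  # -> ABORTED - SQ DELETION
```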
00:27:21.803 [2024-07-13 15:38:52.422269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:27:21.803 task offset: 21248 on job bdev=Nvme6n1 fails
00:27:21.803
00:27:21.803 Latency(us)
00:27:21.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:21.803 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.803 Job: Nvme1n1 ended in about 0.90 seconds with error
00:27:21.803 Verification LBA range: start 0x0 length 0x400
00:27:21.803 Nvme1n1 : 0.90 213.89 13.37 71.30 0.00 221815.94 6893.42 253211.69
00:27:21.803 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.803 Job: Nvme2n1 ended in about 0.89 seconds with error
00:27:21.803 Verification LBA range: start 0x0 length 0x400
00:27:21.803 Nvme2n1 : 0.89 143.64 8.98 71.82 0.00 287569.29 9369.22 295154.73
00:27:21.803 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.803 Job: Nvme3n1 ended in about 0.90 seconds with error
00:27:21.803 Verification LBA range: start 0x0 length 0x400
00:27:21.804 Nvme3n1 : 0.90 141.77 8.86 70.89 0.00 285435.07 42331.40 259425.47
00:27:21.804 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.804 Job: Nvme4n1 ended in about 0.91 seconds with error
00:27:21.804 Verification LBA range: start 0x0 length 0x400
00:27:21.804 Nvme4n1 : 0.91 211.91 13.24 70.64 0.00 210187.38 13883.92 222142.77
00:27:21.804 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.804 Job: Nvme5n1 ended in about 0.89 seconds with error
00:27:21.804 Verification LBA range: start 0x0 length 0x400
00:27:21.804 Nvme5n1 : 0.89 215.17 13.45 71.72 0.00 202193.64 9417.77 254765.13
00:27:21.804 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.804 Job: Nvme6n1 ended in about 0.88 seconds with error
00:27:21.804 Verification LBA range: start 0x0 length 0x400
00:27:21.804 Nvme6n1 : 0.88 144.93 9.06 72.47 0.00 260672.35 9077.95 287387.50
00:27:21.804 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.804 Job: Nvme7n1 ended in about 0.91 seconds with error
00:27:21.804 Verification LBA range: start 0x0 length 0x400
00:27:21.804 Nvme7n1 : 0.91 140.78 8.80 70.39 0.00 263462.81 24272.59 281173.71
00:27:21.804 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.804 Job: Nvme8n1 ended in about 0.91 seconds with error
00:27:21.804 Verification LBA range: start 0x0 length 0x400
00:27:21.804 Nvme8n1 : 0.91 140.30 8.77 70.15 0.00 258611.83 22330.79 281173.71
00:27:21.804 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.804 Job: Nvme9n1 ended in about 0.92 seconds with error
00:27:21.804 Verification LBA range: start 0x0 length 0x400
00:27:21.804 Nvme9n1 : 0.92 147.50 9.22 62.28 0.00 253103.47 28544.57 233016.89
00:27:21.804 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.804 Job: Nvme10n1 ended in about 0.88 seconds with error
00:27:21.804 Verification LBA range: start 0x0 length 0x400
00:27:21.804 Nvme10n1 : 0.88 144.72 9.05 72.36 0.00 237717.18 29515.47 293601.28
00:27:21.804 ===================================================================================================================
00:27:21.804 Total : 1644.64 102.79 704.02 0.00 244742.54 6893.42 295154.73
00:27:21.804 [2024-07-13 15:38:52.449603] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on
non-zero 00:27:21.804 [2024-07-13 15:38:52.449714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:21.804 [2024-07-13 15:38:52.450106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.804 [2024-07-13 15:38:52.450141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a4610 with addr=10.0.0.2, port=4420 00:27:21.804 [2024-07-13 15:38:52.450163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4610 is same with the state(5) to be set 00:27:21.804 [2024-07-13 15:38:52.450303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.804 [2024-07-13 15:38:52.450329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1882140 with addr=10.0.0.2, port=4420 00:27:21.804 [2024-07-13 15:38:52.450344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1882140 is same with the state(5) to be set 00:27:21.804 [2024-07-13 15:38:52.450485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.804 [2024-07-13 15:38:52.450512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16eb740 with addr=10.0.0.2, port=4420 00:27:21.804 [2024-07-13 15:38:52.450529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16eb740 is same with the state(5) to be set 00:27:21.804 [2024-07-13 15:38:52.450657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.804 [2024-07-13 15:38:52.450682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f7b40 with addr=10.0.0.2, port=4420 00:27:21.804 [2024-07-13 15:38:52.450697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f7b40 is same with the state(5) to be set 00:27:21.804 [2024-07-13 15:38:52.452071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:21.804 [2024-07-13 15:38:52.452102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:21.804 [2024-07-13 15:38:52.452121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:21.804 [2024-07-13 15:38:52.452138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:21.804 [2024-07-13 15:38:52.452357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.804 [2024-07-13 15:38:52.452385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1879320 with addr=10.0.0.2, port=4420 00:27:21.804 [2024-07-13 15:38:52.452402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1879320 is same with the state(5) to be set 00:27:21.804 [2024-07-13 15:38:52.452527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.804 [2024-07-13 15:38:52.452552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1878600 with addr=10.0.0.2, port=4420 00:27:21.804 [2024-07-13 15:38:52.452567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878600 is same with the state(5) to be set 00:27:21.804 [2024-07-13 15:38:52.452595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x11a4610 (9): Bad file descriptor 00:27:21.804 [2024-07-13 15:38:52.452619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1882140 (9): Bad file descriptor 00:27:21.804 [2024-07-13 15:38:52.452637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16eb740 (9): Bad file descriptor 00:27:21.804 [2024-07-13 15:38:52.452654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f7b40 (9): Bad file descriptor 00:27:21.804 [2024-07-13 15:38:52.452710] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:21.804 [2024-07-13 15:38:52.452733] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:21.804 [2024-07-13 15:38:52.452751] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:21.804 [2024-07-13 15:38:52.452775] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:21.804 [2024-07-13 15:38:52.453128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.804 [2024-07-13 15:38:52.453156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f1010 with addr=10.0.0.2, port=4420 00:27:21.804 [2024-07-13 15:38:52.453172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f1010 is same with the state(5) to be set 00:27:21.804 [2024-07-13 15:38:52.453298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.804 [2024-07-13 15:38:52.453323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187aa90 with addr=10.0.0.2, port=4420 00:27:21.804 [2024-07-13 15:38:52.453338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187aa90 is same with the state(5) to be set 00:27:21.804 [2024-07-13 15:38:52.453573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.804 [2024-07-13 15:38:52.453597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1879e80 with addr=10.0.0.2, port=4420 00:27:21.804 [2024-07-13 15:38:52.453612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1879e80 is same with the state(5) to be set 00:27:21.804 [2024-07-13 15:38:52.453750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.804 [2024-07-13 15:38:52.453775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16aef10 with addr=10.0.0.2, port=4420 00:27:21.804 [2024-07-13 15:38:52.453790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16aef10 is same with the state(5) to be set 00:27:21.804 [2024-07-13 15:38:52.453808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1879320 (9): Bad file descriptor 00:27:21.804 [2024-07-13 15:38:52.453826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1878600 (9): Bad file descriptor 00:27:21.804 [2024-07-13 15:38:52.453843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:21.804 [2024-07-13 15:38:52.453856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller 
reinitialization failed 00:27:21.804 [2024-07-13 15:38:52.453914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:21.804 [2024-07-13 15:38:52.453938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:21.804 [2024-07-13 15:38:52.453952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:21.804 [2024-07-13 15:38:52.453965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:21.804 [2024-07-13 15:38:52.453982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:21.805 [2024-07-13 15:38:52.453996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:21.805 [2024-07-13 15:38:52.454009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:21.805 [2024-07-13 15:38:52.454024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:21.805 [2024-07-13 15:38:52.454037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:21.805 [2024-07-13 15:38:52.454049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:21.805 [2024-07-13 15:38:52.454141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:21.805 [2024-07-13 15:38:52.454163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:21.805 [2024-07-13 15:38:52.454175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:21.805 [2024-07-13 15:38:52.454192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:21.805 [2024-07-13 15:38:52.454208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f1010 (9): Bad file descriptor 00:27:21.805 [2024-07-13 15:38:52.454226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187aa90 (9): Bad file descriptor 00:27:21.805 [2024-07-13 15:38:52.454244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1879e80 (9): Bad file descriptor 00:27:21.805 [2024-07-13 15:38:52.454261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16aef10 (9): Bad file descriptor 00:27:21.805 [2024-07-13 15:38:52.454275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:21.805 [2024-07-13 15:38:52.454287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:21.805 [2024-07-13 15:38:52.454300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:27:21.805 [2024-07-13 15:38:52.454316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:21.805 [2024-07-13 15:38:52.454330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:21.805 [2024-07-13 15:38:52.454342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:21.805 [2024-07-13 15:38:52.454377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:21.805 [2024-07-13 15:38:52.454395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:21.805 [2024-07-13 15:38:52.454408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:21.805 [2024-07-13 15:38:52.454420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:21.805 [2024-07-13 15:38:52.454432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:21.805 [2024-07-13 15:38:52.454448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:21.805 [2024-07-13 15:38:52.454461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:21.805 [2024-07-13 15:38:52.454474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:21.805 [2024-07-13 15:38:52.454489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:21.805 [2024-07-13 15:38:52.454502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:21.805 [2024-07-13 15:38:52.454515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:21.805 [2024-07-13 15:38:52.454529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:21.805 [2024-07-13 15:38:52.454542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:21.805 [2024-07-13 15:38:52.454555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:21.805 [2024-07-13 15:38:52.454595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:21.805 [2024-07-13 15:38:52.454612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:21.805 [2024-07-13 15:38:52.454624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:21.805 [2024-07-13 15:38:52.454635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
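Aside on the "connect() failed, errno = 111" records above: after the shutdown test kills the target, the initiator's reconnect attempts to 10.0.0.2:4420 are refused, and on Linux errno 111 is ECONNREFUSED (no listener on the port), which then cascades into the "Bad file descriptor" flush errors and the failed controller reinitializations. The snippet below is an illustrative sketch only, not part of the harness; 127.0.0.1:4420 stands in for the log's target address and is assumed to have nothing listening on it.

```python
import errno
import socket

# Reproduce the errno seen in the log by connecting to a port with no listener.
# On Linux the peer answers the SYN with a RST, so connect() fails with
# errno 111 (ECONNREFUSED), exactly what posix_sock_create reports above.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    sock.connect(("127.0.0.1", 4420))  # stand-in for 10.0.0.2:4420 in the log
except OSError as exc:
    print(exc.errno, errno.errorcode.get(exc.errno))  # expected: 111 ECONNREFUSED
finally:
    sock.close()
```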
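The bdevperf summary table a few records up can be sanity-checked with a few lines of arithmetic; this is an editorial aside with the figures copied from the table, not output of the test. The "Total" row is simply the column-wise sum of the ten per-device rows, with small deviations due to rounding of the printed two-decimal values.

```python
# Per-device (IOPS, MiB/s, Fail/s) rows copied from the summary table above.
rows = [
    (213.89, 13.37, 71.30),  # Nvme1n1
    (143.64,  8.98, 71.82),  # Nvme2n1
    (141.77,  8.86, 70.89),  # Nvme3n1
    (211.91, 13.24, 70.64),  # Nvme4n1
    (215.17, 13.45, 71.72),  # Nvme5n1
    (144.93,  9.06, 72.47),  # Nvme6n1
    (140.78,  8.80, 70.39),  # Nvme7n1
    (140.30,  8.77, 70.15),  # Nvme8n1
    (147.50,  9.22, 62.28),  # Nvme9n1
    (144.72,  9.05, 72.36),  # Nvme10n1
]
iops, mibs, fails = (round(sum(col), 2) for col in zip(*rows))
print(iops, mibs, fails)  # ~1644.61 102.8 704.02 vs printed 1644.64 102.79 704.02
```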
00:27:22.398 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:22.398 15:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1190270 00:27:23.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1190270) - No such process 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:23.332 rmmod nvme_tcp 00:27:23.332 rmmod nvme_fabrics 00:27:23.332 rmmod nvme_keyring 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:23.332 15:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.877 15:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:25.877 00:27:25.877 real 0m7.477s 00:27:25.877 user 0m17.970s 00:27:25.877 sys 0m1.512s 00:27:25.877 
15:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:25.877 15:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:25.877 ************************************ 00:27:25.877 END TEST nvmf_shutdown_tc3 00:27:25.877 ************************************ 00:27:25.877 15:38:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:27:25.877 15:38:56 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:25.877 00:27:25.877 real 0m27.148s 00:27:25.877 user 1m15.278s 00:27:25.877 sys 0m6.381s 00:27:25.877 15:38:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:25.877 15:38:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:25.877 ************************************ 00:27:25.877 END TEST nvmf_shutdown 00:27:25.877 ************************************ 00:27:25.877 15:38:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:25.877 15:38:56 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:27:25.877 15:38:56 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:25.877 15:38:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:25.877 15:38:56 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:27:25.877 15:38:56 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:25.877 15:38:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:25.877 15:38:56 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:27:25.877 15:38:56 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:25.877 15:38:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:25.877 15:38:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:25.877 15:38:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:25.877 ************************************ 00:27:25.877 START TEST nvmf_multicontroller 00:27:25.877 ************************************ 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:25.877 * Looking for test storage... 
00:27:25.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:25.877 15:38:56 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:25.877 15:38:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:27.784 15:38:58 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:27.784 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:27.784 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:27.784 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:27.785 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:27.785 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:27.785 15:38:58 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:27.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:27.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:27:27.785 00:27:27.785 --- 10.0.0.2 ping statistics --- 00:27:27.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.785 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:27.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:27.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:27:27.785 00:27:27.785 --- 10.0.0.1 ping statistics --- 00:27:27.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.785 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1192784 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1192784 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1192784 ']' 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:27.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:27.785 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:27.785 [2024-07-13 15:38:58.392558] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:27.785 [2024-07-13 15:38:58.392642] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:27.785 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.785 [2024-07-13 15:38:58.430085] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:27.785 [2024-07-13 15:38:58.464339] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:28.044 [2024-07-13 15:38:58.555094] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:28.045 [2024-07-13 15:38:58.555167] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:28.045 [2024-07-13 15:38:58.555184] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:28.045 [2024-07-13 15:38:58.555198] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:28.045 [2024-07-13 15:38:58.555209] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:28.045 [2024-07-13 15:38:58.555293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:28.045 [2024-07-13 15:38:58.555409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:28.045 [2024-07-13 15:38:58.555411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.045 [2024-07-13 15:38:58.705206] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.045 Malloc0 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.045 [2024-07-13 15:38:58.768229] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.045 
15:38:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.045 [2024-07-13 15:38:58.776080] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.045 Malloc1 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.045 15:38:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:28.304 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.304 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.304 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.304 15:38:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:28.304 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.304 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.304 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.304 15:38:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:28.304 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.304 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.304 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.304 15:38:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1192811 00:27:28.304 15:38:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:28.304 15:38:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:28.304 15:38:58 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 1192811 /var/tmp/bdevperf.sock 00:27:28.304 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1192811 ']' 00:27:28.304 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:28.304 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:28.305 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:28.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:28.305 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:28.305 15:38:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.564 NVMe0n1 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.564 1 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.564 request: 00:27:28.564 { 00:27:28.564 "name": "NVMe0", 00:27:28.564 "trtype": "tcp", 00:27:28.564 "traddr": "10.0.0.2", 00:27:28.564 "adrfam": "ipv4", 00:27:28.564 "trsvcid": "4420", 00:27:28.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:28.564 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:28.564 "hostaddr": "10.0.0.2", 00:27:28.564 "hostsvcid": "60000", 00:27:28.564 "prchk_reftag": false, 00:27:28.564 "prchk_guard": false, 00:27:28.564 "hdgst": false, 00:27:28.564 "ddgst": false, 00:27:28.564 "method": "bdev_nvme_attach_controller", 00:27:28.564 "req_id": 1 00:27:28.564 } 00:27:28.564 Got JSON-RPC error response 00:27:28.564 response: 00:27:28.564 { 00:27:28.564 "code": -114, 00:27:28.564 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:28.564 } 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.564 request: 00:27:28.564 { 00:27:28.564 "name": "NVMe0", 00:27:28.564 "trtype": "tcp", 00:27:28.564 "traddr": "10.0.0.2", 00:27:28.564 "adrfam": "ipv4", 00:27:28.564 "trsvcid": "4420", 00:27:28.564 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:28.564 "hostaddr": "10.0.0.2", 00:27:28.564 "hostsvcid": "60000", 00:27:28.564 "prchk_reftag": false, 00:27:28.564 "prchk_guard": false, 00:27:28.564 
"hdgst": false, 00:27:28.564 "ddgst": false, 00:27:28.564 "method": "bdev_nvme_attach_controller", 00:27:28.564 "req_id": 1 00:27:28.564 } 00:27:28.564 Got JSON-RPC error response 00:27:28.564 response: 00:27:28.564 { 00:27:28.564 "code": -114, 00:27:28.564 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:28.564 } 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:28.564 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.565 request: 00:27:28.565 { 00:27:28.565 "name": "NVMe0", 00:27:28.565 "trtype": "tcp", 00:27:28.565 "traddr": "10.0.0.2", 00:27:28.565 "adrfam": "ipv4", 00:27:28.565 "trsvcid": "4420", 00:27:28.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:28.565 "hostaddr": "10.0.0.2", 00:27:28.565 "hostsvcid": "60000", 00:27:28.565 "prchk_reftag": false, 00:27:28.565 "prchk_guard": false, 00:27:28.565 "hdgst": false, 00:27:28.565 "ddgst": false, 00:27:28.565 "multipath": "disable", 00:27:28.565 "method": "bdev_nvme_attach_controller", 00:27:28.565 "req_id": 1 00:27:28.565 } 00:27:28.565 Got JSON-RPC error response 00:27:28.565 response: 00:27:28.565 { 00:27:28.565 "code": -114, 00:27:28.565 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:28.565 } 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:28.565 15:38:59 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.565 request: 00:27:28.565 { 00:27:28.565 "name": "NVMe0", 00:27:28.565 "trtype": "tcp", 00:27:28.565 "traddr": "10.0.0.2", 00:27:28.565 "adrfam": "ipv4", 00:27:28.565 "trsvcid": "4420", 00:27:28.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:28.565 "hostaddr": "10.0.0.2", 00:27:28.565 "hostsvcid": "60000", 00:27:28.565 "prchk_reftag": false, 00:27:28.565 "prchk_guard": false, 00:27:28.565 "hdgst": false, 00:27:28.565 "ddgst": false, 00:27:28.565 "multipath": "failover", 00:27:28.565 "method": "bdev_nvme_attach_controller", 00:27:28.565 "req_id": 1 00:27:28.565 } 00:27:28.565 Got JSON-RPC error response 00:27:28.565 response: 00:27:28.565 { 00:27:28.565 "code": -114, 00:27:28.565 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:28.565 } 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.565 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.823 00:27:28.823 15:38:59 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.823 15:38:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:28.823 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.823 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:28.823 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.824 15:38:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:28.824 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.824 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.084 00:27:29.084 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.084 15:38:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:29.084 15:38:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:29.084 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.084 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.084 15:38:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.084 15:38:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:29.084 15:38:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:30.020 0 00:27:30.020 15:39:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:30.020 15:39:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.020 15:39:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:30.020 15:39:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.020 15:39:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1192811 00:27:30.020 15:39:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1192811 ']' 00:27:30.020 15:39:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1192811 00:27:30.020 15:39:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:27:30.020 15:39:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:30.020 15:39:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1192811 00:27:30.020 15:39:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:30.020 15:39:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:30.020 15:39:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1192811' 00:27:30.020 killing process with pid 1192811 00:27:30.020 15:39:00 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1192811 00:27:30.020 15:39:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1192811 00:27:30.278 15:39:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:30.279 15:39:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.279 15:39:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:30.279 15:39:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.279 15:39:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:30.279 15:39:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.279 15:39:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:30.279 15:39:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.279 15:39:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:27:30.279 15:39:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:30.279 15:39:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:27:30.279 15:39:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:30.279 15:39:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:27:30.279 15:39:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:27:30.279 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:30.279 [2024-07-13 15:38:58.876055] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:30.279 [2024-07-13 15:38:58.876166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192811 ] 00:27:30.279 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.279 [2024-07-13 15:38:58.910554] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:30.279 [2024-07-13 15:38:58.939260] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.279 [2024-07-13 15:38:59.024759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.279 [2024-07-13 15:38:59.597864] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 85f8cc09-054d-4c0a-956d-aee17b011070 already exists 00:27:30.279 [2024-07-13 15:38:59.597909] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:85f8cc09-054d-4c0a-956d-aee17b011070 alias for bdev NVMe1n1 00:27:30.279 [2024-07-13 15:38:59.597933] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:30.279 Running I/O for 1 seconds... 
00:27:30.279 00:27:30.279 Latency(us) 00:27:30.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:30.279 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:30.279 NVMe0n1 : 1.00 19229.27 75.11 0.00 0.00 6647.58 2111.72 11796.48 00:27:30.279 =================================================================================================================== 00:27:30.279 Total : 19229.27 75.11 0.00 0.00 6647.58 2111.72 11796.48 00:27:30.279 Received shutdown signal, test time was about 1.000000 seconds 00:27:30.279 00:27:30.279 Latency(us) 00:27:30.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:30.279 =================================================================================================================== 00:27:30.279 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:30.279 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:30.279 15:39:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:30.279 15:39:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:27:30.279 15:39:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:30.279 15:39:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:30.279 15:39:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:27:30.279 15:39:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:30.279 15:39:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:27:30.279 15:39:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:30.279 15:39:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:30.279 rmmod nvme_tcp 00:27:30.279 rmmod nvme_fabrics 00:27:30.279 rmmod nvme_keyring 00:27:30.537 15:39:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:30.537 15:39:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:27:30.537 15:39:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:27:30.537 15:39:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1192784 ']' 00:27:30.537 15:39:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1192784 00:27:30.537 15:39:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1192784 ']' 00:27:30.537 15:39:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1192784 00:27:30.537 15:39:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:27:30.537 15:39:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:30.537 15:39:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1192784 00:27:30.537 15:39:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:30.537 15:39:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:30.537 15:39:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1192784' 00:27:30.537 killing process with pid 1192784 00:27:30.537 15:39:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1192784 00:27:30.537 15:39:01 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1192784 00:27:30.796 15:39:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:30.796 15:39:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:30.796 15:39:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:30.796 15:39:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:30.796 15:39:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:30.796 15:39:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.796 15:39:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:30.796 15:39:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.706 15:39:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:32.706 00:27:32.706 real 0m7.246s 00:27:32.706 user 0m11.217s 00:27:32.706 sys 0m2.203s 00:27:32.706 15:39:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:32.706 15:39:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:32.706 ************************************ 00:27:32.706 END TEST nvmf_multicontroller 00:27:32.706 ************************************ 00:27:32.706 15:39:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:32.706 15:39:03 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:32.706 15:39:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:32.706 15:39:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:32.706 15:39:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:32.706 ************************************ 00:27:32.706 START TEST nvmf_aer 00:27:32.706 ************************************ 00:27:32.706 15:39:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:32.965 * Looking for test storage... 
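For reference, the target-side RPC sequence this aer test drives can be condensed from the xtrace entries that follow (the NQN, serial number, bdev names and the 10.0.0.2:4420 listener are the values used in this particular run, not generic defaults):

  # TCP transport plus a 64 MB malloc bdev with 512-byte blocks
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 --name Malloc0
  # expose it as namespace 1 of cnode1 (max 2 namespaces) on 10.0.0.2:4420
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # once test/nvme/aer/aer is connected, adding a second namespace is what
  # produces the changed-namespace AER (log page 4) that the test waits for
  rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2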
00:27:32.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:32.965 15:39:03 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:32.965 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:32.965 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:32.965 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:32.965 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:32.965 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:32.965 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:32.965 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:32.965 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:32.965 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:32.965 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:32.965 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:32.965 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:32.965 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:32.965 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:32.965 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:32.965 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:32.965 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:32.965 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:32.965 15:39:03 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.965 15:39:03 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.965 15:39:03 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:27:32.966 15:39:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:34.866 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:34.866 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:27:34.866 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:34.866 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:27:34.866 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:34.866 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:34.866 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:34.866 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:27:34.866 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:34.866 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:27:34.866 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:27:34.866 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:34.867 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:27:34.867 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:34.867 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:34.867 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:34.867 
15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:34.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:34.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:27:34.867 00:27:34.867 --- 10.0.0.2 ping statistics --- 00:27:34.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.867 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:34.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:34.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:27:34.867 00:27:34.867 --- 10.0.0.1 ping statistics --- 00:27:34.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.867 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1195125 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1195125 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1195125 ']' 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:34.867 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:34.867 [2024-07-13 15:39:05.602792] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:34.867 [2024-07-13 15:39:05.602904] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:35.126 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.126 [2024-07-13 15:39:05.642819] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:35.126 [2024-07-13 15:39:05.674989] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:35.126 [2024-07-13 15:39:05.766819] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:35.126 [2024-07-13 15:39:05.766897] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:35.126 [2024-07-13 15:39:05.766923] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:35.126 [2024-07-13 15:39:05.766944] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:35.126 [2024-07-13 15:39:05.766962] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:35.126 [2024-07-13 15:39:05.767027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.126 [2024-07-13 15:39:05.767065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:35.126 [2024-07-13 15:39:05.767189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:35.126 [2024-07-13 15:39:05.767197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.126 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:35.126 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:27:35.126 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:35.126 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:35.126 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:35.383 15:39:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:35.383 15:39:05 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:35.383 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.383 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:35.383 [2024-07-13 15:39:05.921559] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:35.383 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.383 15:39:05 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:35.383 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.383 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:35.383 Malloc0 00:27:35.383 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.384 15:39:05 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:35.384 [2024-07-13 15:39:05.973307] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:35.384 [ 00:27:35.384 { 00:27:35.384 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:35.384 "subtype": "Discovery", 00:27:35.384 "listen_addresses": [], 00:27:35.384 "allow_any_host": true, 00:27:35.384 "hosts": [] 00:27:35.384 }, 00:27:35.384 { 00:27:35.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:35.384 "subtype": "NVMe", 00:27:35.384 "listen_addresses": [ 00:27:35.384 { 00:27:35.384 "trtype": "TCP", 00:27:35.384 "adrfam": "IPv4", 00:27:35.384 "traddr": "10.0.0.2", 00:27:35.384 "trsvcid": "4420" 00:27:35.384 } 00:27:35.384 ], 00:27:35.384 "allow_any_host": true, 00:27:35.384 "hosts": [], 00:27:35.384 "serial_number": "SPDK00000000000001", 00:27:35.384 "model_number": "SPDK bdev Controller", 00:27:35.384 "max_namespaces": 2, 00:27:35.384 "min_cntlid": 1, 00:27:35.384 "max_cntlid": 65519, 00:27:35.384 "namespaces": [ 00:27:35.384 { 00:27:35.384 "nsid": 1, 00:27:35.384 "bdev_name": "Malloc0", 00:27:35.384 "name": "Malloc0", 00:27:35.384 "nguid": "9DA3B83477FD44648011F5077EAFFDC8", 00:27:35.384 "uuid": "9da3b834-77fd-4464-8011-f5077eaffdc8" 00:27:35.384 } 00:27:35.384 ] 00:27:35.384 } 00:27:35.384 ] 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1195152 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:27:35.384 15:39:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:35.384 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.384 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:35.384 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:27:35.384 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:27:35.384 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:35.643 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:35.643 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:27:35.643 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:27:35.643 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:35.643 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:35.643 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:35.643 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:27:35.643 15:39:06 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:35.643 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.643 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:35.643 Malloc1 00:27:35.643 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.643 15:39:06 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:35.643 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.643 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:35.643 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.643 15:39:06 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:35.643 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.643 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:35.643 [ 00:27:35.643 { 00:27:35.643 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:35.643 "subtype": "Discovery", 00:27:35.643 "listen_addresses": [], 00:27:35.643 "allow_any_host": true, 00:27:35.643 "hosts": [] 00:27:35.643 }, 00:27:35.643 { 00:27:35.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:35.643 "subtype": "NVMe", 00:27:35.643 "listen_addresses": [ 00:27:35.643 { 00:27:35.643 "trtype": "TCP", 00:27:35.643 "adrfam": "IPv4", 00:27:35.643 "traddr": "10.0.0.2", 00:27:35.643 "trsvcid": "4420" 00:27:35.643 } 00:27:35.643 ], 00:27:35.643 "allow_any_host": true, 00:27:35.643 "hosts": [], 00:27:35.643 "serial_number": "SPDK00000000000001", 00:27:35.643 "model_number": "SPDK bdev Controller", 00:27:35.643 "max_namespaces": 2, 00:27:35.643 "min_cntlid": 1, 00:27:35.643 "max_cntlid": 65519, 00:27:35.643 "namespaces": [ 00:27:35.643 { 00:27:35.643 "nsid": 1, 00:27:35.643 "bdev_name": "Malloc0", 00:27:35.643 "name": "Malloc0", 00:27:35.643 "nguid": "9DA3B83477FD44648011F5077EAFFDC8", 00:27:35.643 "uuid": "9da3b834-77fd-4464-8011-f5077eaffdc8" 00:27:35.643 }, 00:27:35.643 { 00:27:35.643 "nsid": 2, 00:27:35.643 "bdev_name": "Malloc1", 00:27:35.643 "name": "Malloc1", 00:27:35.643 "nguid": "42C9F9070C314D2BB509AD6F89ED9974", 00:27:35.643 "uuid": "42c9f907-0c31-4d2b-b509-ad6f89ed9974" 00:27:35.643 } 00:27:35.643 ] 00:27:35.643 } 00:27:35.643 ] 00:27:35.643 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.643 15:39:06 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1195152 00:27:35.643 Asynchronous Event Request test 00:27:35.643 Attaching to 10.0.0.2 00:27:35.643 Attached to 10.0.0.2 00:27:35.643 Registering asynchronous event callbacks... 00:27:35.643 Starting namespace attribute notice tests for all controllers... 
00:27:35.643 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:35.643 aer_cb - Changed Namespace 00:27:35.643 Cleaning up... 00:27:35.643 15:39:06 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:35.643 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.643 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:35.901 rmmod nvme_tcp 00:27:35.901 rmmod nvme_fabrics 00:27:35.901 rmmod nvme_keyring 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1195125 ']' 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1195125 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1195125 ']' 00:27:35.901 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1195125 00:27:35.902 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:27:35.902 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:35.902 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1195125 00:27:35.902 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:35.902 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:35.902 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1195125' 00:27:35.902 killing process with pid 1195125 00:27:35.902 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1195125 00:27:35.902 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1195125 00:27:36.161 15:39:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == 
iso ']' 00:27:36.161 15:39:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:36.161 15:39:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:36.161 15:39:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:36.161 15:39:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:36.161 15:39:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.161 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:36.161 15:39:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.068 15:39:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:38.068 00:27:38.068 real 0m5.358s 00:27:38.068 user 0m4.495s 00:27:38.068 sys 0m1.863s 00:27:38.068 15:39:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:38.068 15:39:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:38.068 ************************************ 00:27:38.068 END TEST nvmf_aer 00:27:38.068 ************************************ 00:27:38.068 15:39:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:38.068 15:39:08 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:38.068 15:39:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:38.068 15:39:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:38.068 15:39:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:38.326 ************************************ 00:27:38.326 START TEST nvmf_async_init 00:27:38.326 ************************************ 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:38.326 * Looking for test storage... 
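The async_init flow captured below reduces to a short RPC sequence: create null bdev null0 with 512-byte blocks (the bdev_get_bdevs output further down shows 2097152 blocks, i.e. 1 GiB), publish it as a namespace of cnode0 with an explicit NGUID, start a TCP listener, then attach from the host path so it surfaces as nvme0n1. Condensed from this run's xtrace; the NGUID and addresses are the ones generated here:

  rpc_cmd nvmf_create_transport -t tcp -o
  rpc_cmd bdev_null_create null0 1024 512
  rpc_cmd bdev_wait_for_examine
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8b80920c1db4495bb7d3a85a5a279bf8
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # host side: loop back over TCP and surface the namespace as bdev nvme0n1
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0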
00:27:38.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:38.326 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8b80920c1db4495bb7d3a85a5a279bf8 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:38.327 15:39:08 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:27:38.327 15:39:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:40.227 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:40.227 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:40.227 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
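The entries that follow repeat the per-test network plumbing already visible in the aer run above. Condensed, the TCP loopback used on this rig moves the target port (cvl_0_0) into its own namespace and leaves the initiator port (cvl_0_1) in the root namespace; the interface names and addresses below are this host's, taken from the log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator keeps 10.0.0.1, target answers on 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and sanity-check reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1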
00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:40.227 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:40.227 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:40.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:40.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:27:40.228 00:27:40.228 --- 10.0.0.2 ping statistics --- 00:27:40.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.228 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:40.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:40.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:27:40.228 00:27:40.228 --- 10.0.0.1 ping statistics --- 00:27:40.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.228 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1197709 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1197709 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1197709 ']' 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:40.228 15:39:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:40.228 [2024-07-13 15:39:10.911647] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:27:40.228 [2024-07-13 15:39:10.911716] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:40.228 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.228 [2024-07-13 15:39:10.947698] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:40.228 [2024-07-13 15:39:10.978953] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.487 [2024-07-13 15:39:11.070767] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:40.487 [2024-07-13 15:39:11.070831] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:40.487 [2024-07-13 15:39:11.070856] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:40.487 [2024-07-13 15:39:11.070890] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:40.487 [2024-07-13 15:39:11.070911] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:40.487 [2024-07-13 15:39:11.070964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:40.487 [2024-07-13 15:39:11.205640] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:40.487 null0 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:40.487 15:39:11 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8b80920c1db4495bb7d3a85a5a279bf8 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:40.487 [2024-07-13 15:39:11.245888] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.487 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:40.752 nvme0n1 00:27:40.752 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.752 15:39:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:40.752 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.752 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:40.752 [ 00:27:40.752 { 00:27:40.752 "name": "nvme0n1", 00:27:40.752 "aliases": [ 00:27:40.752 "8b80920c-1db4-495b-b7d3-a85a5a279bf8" 00:27:40.752 ], 00:27:40.752 "product_name": "NVMe disk", 00:27:40.752 "block_size": 512, 00:27:40.752 "num_blocks": 2097152, 00:27:40.752 "uuid": "8b80920c-1db4-495b-b7d3-a85a5a279bf8", 00:27:40.752 "assigned_rate_limits": { 00:27:40.752 "rw_ios_per_sec": 0, 00:27:40.752 "rw_mbytes_per_sec": 0, 00:27:40.752 "r_mbytes_per_sec": 0, 00:27:40.752 "w_mbytes_per_sec": 0 00:27:40.752 }, 00:27:40.752 "claimed": false, 00:27:40.752 "zoned": false, 00:27:40.752 "supported_io_types": { 00:27:40.752 "read": true, 00:27:40.752 "write": true, 00:27:40.752 "unmap": false, 00:27:40.752 "flush": true, 00:27:40.752 "reset": true, 00:27:40.752 "nvme_admin": true, 00:27:40.752 "nvme_io": true, 00:27:40.752 "nvme_io_md": false, 00:27:40.752 "write_zeroes": true, 00:27:40.752 "zcopy": false, 00:27:40.752 "get_zone_info": false, 00:27:40.752 "zone_management": false, 00:27:40.752 "zone_append": false, 00:27:40.752 "compare": true, 00:27:40.752 "compare_and_write": true, 00:27:40.752 "abort": true, 00:27:40.752 "seek_hole": false, 00:27:40.752 "seek_data": false, 00:27:40.752 "copy": true, 00:27:40.752 "nvme_iov_md": false 00:27:40.752 }, 00:27:40.752 "memory_domains": [ 00:27:40.752 { 00:27:40.752 "dma_device_id": "system", 00:27:40.752 "dma_device_type": 1 00:27:40.752 } 00:27:40.752 ], 
00:27:40.752 "driver_specific": { 00:27:40.752 "nvme": [ 00:27:40.752 { 00:27:40.752 "trid": { 00:27:40.752 "trtype": "TCP", 00:27:40.752 "adrfam": "IPv4", 00:27:40.752 "traddr": "10.0.0.2", 00:27:40.752 "trsvcid": "4420", 00:27:40.752 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:40.752 }, 00:27:40.752 "ctrlr_data": { 00:27:40.752 "cntlid": 1, 00:27:40.752 "vendor_id": "0x8086", 00:27:40.752 "model_number": "SPDK bdev Controller", 00:27:40.752 "serial_number": "00000000000000000000", 00:27:40.752 "firmware_revision": "24.09", 00:27:40.752 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:40.752 "oacs": { 00:27:40.752 "security": 0, 00:27:40.752 "format": 0, 00:27:40.752 "firmware": 0, 00:27:40.752 "ns_manage": 0 00:27:40.752 }, 00:27:40.752 "multi_ctrlr": true, 00:27:40.752 "ana_reporting": false 00:27:40.752 }, 00:27:40.752 "vs": { 00:27:40.752 "nvme_version": "1.3" 00:27:40.752 }, 00:27:40.752 "ns_data": { 00:27:40.752 "id": 1, 00:27:40.752 "can_share": true 00:27:40.752 } 00:27:40.752 } 00:27:40.752 ], 00:27:40.752 "mp_policy": "active_passive" 00:27:40.752 } 00:27:40.752 } 00:27:40.752 ] 00:27:40.752 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.752 15:39:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:40.752 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.752 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:40.752 [2024-07-13 15:39:11.495318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:40.752 [2024-07-13 15:39:11.495422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a7a0b0 (9): Bad file descriptor 00:27:41.012 [2024-07-13 15:39:11.628023] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:41.012 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:41.013 [ 00:27:41.013 { 00:27:41.013 "name": "nvme0n1", 00:27:41.013 "aliases": [ 00:27:41.013 "8b80920c-1db4-495b-b7d3-a85a5a279bf8" 00:27:41.013 ], 00:27:41.013 "product_name": "NVMe disk", 00:27:41.013 "block_size": 512, 00:27:41.013 "num_blocks": 2097152, 00:27:41.013 "uuid": "8b80920c-1db4-495b-b7d3-a85a5a279bf8", 00:27:41.013 "assigned_rate_limits": { 00:27:41.013 "rw_ios_per_sec": 0, 00:27:41.013 "rw_mbytes_per_sec": 0, 00:27:41.013 "r_mbytes_per_sec": 0, 00:27:41.013 "w_mbytes_per_sec": 0 00:27:41.013 }, 00:27:41.013 "claimed": false, 00:27:41.013 "zoned": false, 00:27:41.013 "supported_io_types": { 00:27:41.013 "read": true, 00:27:41.013 "write": true, 00:27:41.013 "unmap": false, 00:27:41.013 "flush": true, 00:27:41.013 "reset": true, 00:27:41.013 "nvme_admin": true, 00:27:41.013 "nvme_io": true, 00:27:41.013 "nvme_io_md": false, 00:27:41.013 "write_zeroes": true, 00:27:41.013 "zcopy": false, 00:27:41.013 "get_zone_info": false, 00:27:41.013 "zone_management": false, 00:27:41.013 "zone_append": false, 00:27:41.013 "compare": true, 00:27:41.013 "compare_and_write": true, 00:27:41.013 "abort": true, 00:27:41.013 "seek_hole": false, 00:27:41.013 "seek_data": false, 00:27:41.013 "copy": true, 00:27:41.013 "nvme_iov_md": false 00:27:41.013 }, 00:27:41.013 "memory_domains": [ 00:27:41.013 { 00:27:41.013 "dma_device_id": "system", 00:27:41.013 "dma_device_type": 1 00:27:41.013 } 00:27:41.013 ], 00:27:41.013 "driver_specific": { 00:27:41.013 "nvme": [ 00:27:41.013 { 00:27:41.013 "trid": { 00:27:41.013 "trtype": "TCP", 00:27:41.013 "adrfam": "IPv4", 00:27:41.013 "traddr": "10.0.0.2", 00:27:41.013 "trsvcid": "4420", 00:27:41.013 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:41.013 }, 00:27:41.013 "ctrlr_data": { 00:27:41.013 "cntlid": 2, 00:27:41.013 "vendor_id": "0x8086", 00:27:41.013 "model_number": "SPDK bdev Controller", 00:27:41.013 "serial_number": "00000000000000000000", 00:27:41.013 "firmware_revision": "24.09", 00:27:41.013 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:41.013 "oacs": { 00:27:41.013 "security": 0, 00:27:41.013 "format": 0, 00:27:41.013 "firmware": 0, 00:27:41.013 "ns_manage": 0 00:27:41.013 }, 00:27:41.013 "multi_ctrlr": true, 00:27:41.013 "ana_reporting": false 00:27:41.013 }, 00:27:41.013 "vs": { 00:27:41.013 "nvme_version": "1.3" 00:27:41.013 }, 00:27:41.013 "ns_data": { 00:27:41.013 "id": 1, 00:27:41.013 "can_share": true 00:27:41.013 } 00:27:41.013 } 00:27:41.013 ], 00:27:41.013 "mp_policy": "active_passive" 00:27:41.013 } 00:27:41.013 } 00:27:41.013 ] 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:41.013 15:39:11 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.as9zvcQBWF 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.as9zvcQBWF 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:41.013 [2024-07-13 15:39:11.675941] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:41.013 [2024-07-13 15:39:11.676081] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.as9zvcQBWF 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:41.013 [2024-07-13 15:39:11.683960] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.as9zvcQBWF 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:41.013 [2024-07-13 15:39:11.691978] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:41.013 [2024-07-13 15:39:11.692048] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:41.013 nvme0n1 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:41.013 [ 00:27:41.013 { 00:27:41.013 "name": "nvme0n1", 00:27:41.013 "aliases": [ 00:27:41.013 "8b80920c-1db4-495b-b7d3-a85a5a279bf8" 00:27:41.013 ], 00:27:41.013 "product_name": "NVMe disk", 00:27:41.013 
"block_size": 512, 00:27:41.013 "num_blocks": 2097152, 00:27:41.013 "uuid": "8b80920c-1db4-495b-b7d3-a85a5a279bf8", 00:27:41.013 "assigned_rate_limits": { 00:27:41.013 "rw_ios_per_sec": 0, 00:27:41.013 "rw_mbytes_per_sec": 0, 00:27:41.013 "r_mbytes_per_sec": 0, 00:27:41.013 "w_mbytes_per_sec": 0 00:27:41.013 }, 00:27:41.013 "claimed": false, 00:27:41.013 "zoned": false, 00:27:41.013 "supported_io_types": { 00:27:41.013 "read": true, 00:27:41.013 "write": true, 00:27:41.013 "unmap": false, 00:27:41.013 "flush": true, 00:27:41.013 "reset": true, 00:27:41.013 "nvme_admin": true, 00:27:41.013 "nvme_io": true, 00:27:41.013 "nvme_io_md": false, 00:27:41.013 "write_zeroes": true, 00:27:41.013 "zcopy": false, 00:27:41.013 "get_zone_info": false, 00:27:41.013 "zone_management": false, 00:27:41.013 "zone_append": false, 00:27:41.013 "compare": true, 00:27:41.013 "compare_and_write": true, 00:27:41.013 "abort": true, 00:27:41.013 "seek_hole": false, 00:27:41.013 "seek_data": false, 00:27:41.013 "copy": true, 00:27:41.013 "nvme_iov_md": false 00:27:41.013 }, 00:27:41.013 "memory_domains": [ 00:27:41.013 { 00:27:41.013 "dma_device_id": "system", 00:27:41.013 "dma_device_type": 1 00:27:41.013 } 00:27:41.013 ], 00:27:41.013 "driver_specific": { 00:27:41.013 "nvme": [ 00:27:41.013 { 00:27:41.013 "trid": { 00:27:41.013 "trtype": "TCP", 00:27:41.013 "adrfam": "IPv4", 00:27:41.013 "traddr": "10.0.0.2", 00:27:41.013 "trsvcid": "4421", 00:27:41.013 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:41.013 }, 00:27:41.013 "ctrlr_data": { 00:27:41.013 "cntlid": 3, 00:27:41.013 "vendor_id": "0x8086", 00:27:41.013 "model_number": "SPDK bdev Controller", 00:27:41.013 "serial_number": "00000000000000000000", 00:27:41.013 "firmware_revision": "24.09", 00:27:41.013 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:41.013 "oacs": { 00:27:41.013 "security": 0, 00:27:41.013 "format": 0, 00:27:41.013 "firmware": 0, 00:27:41.013 "ns_manage": 0 00:27:41.013 }, 00:27:41.013 "multi_ctrlr": true, 00:27:41.013 "ana_reporting": false 00:27:41.013 }, 00:27:41.013 "vs": { 00:27:41.013 "nvme_version": "1.3" 00:27:41.013 }, 00:27:41.013 "ns_data": { 00:27:41.013 "id": 1, 00:27:41.013 "can_share": true 00:27:41.013 } 00:27:41.013 } 00:27:41.013 ], 00:27:41.013 "mp_policy": "active_passive" 00:27:41.013 } 00:27:41.013 } 00:27:41.013 ] 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.013 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:41.272 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.272 15:39:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.as9zvcQBWF 00:27:41.272 15:39:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:41.273 15:39:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:27:41.273 15:39:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:41.273 15:39:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:27:41.273 15:39:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:41.273 15:39:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:27:41.273 15:39:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 
00:27:41.273 15:39:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:41.273 rmmod nvme_tcp 00:27:41.273 rmmod nvme_fabrics 00:27:41.273 rmmod nvme_keyring 00:27:41.273 15:39:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:41.273 15:39:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:27:41.273 15:39:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:27:41.273 15:39:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1197709 ']' 00:27:41.273 15:39:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1197709 00:27:41.273 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1197709 ']' 00:27:41.273 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1197709 00:27:41.273 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:27:41.273 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:41.273 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1197709 00:27:41.273 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:41.273 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:41.273 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1197709' 00:27:41.273 killing process with pid 1197709 00:27:41.273 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1197709 00:27:41.273 [2024-07-13 15:39:11.876292] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:41.273 [2024-07-13 15:39:11.876335] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:41.273 15:39:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1197709 00:27:41.531 15:39:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:41.531 15:39:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:41.531 15:39:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:41.531 15:39:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:41.531 15:39:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:41.531 15:39:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.531 15:39:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:41.531 15:39:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.436 15:39:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:43.436 00:27:43.436 real 0m5.301s 00:27:43.436 user 0m2.021s 00:27:43.436 sys 0m1.671s 00:27:43.436 15:39:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:43.436 15:39:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:43.436 ************************************ 00:27:43.436 END TEST nvmf_async_init 00:27:43.436 ************************************ 00:27:43.436 15:39:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:43.436 15:39:14 
nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:43.436 15:39:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:43.436 15:39:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:43.436 15:39:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:43.436 ************************************ 00:27:43.436 START TEST dma 00:27:43.436 ************************************ 00:27:43.436 15:39:14 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:43.695 * Looking for test storage... 00:27:43.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:43.695 15:39:14 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:43.695 15:39:14 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:43.695 15:39:14 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.695 15:39:14 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.695 15:39:14 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.695 15:39:14 nvmf_tcp.dma -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.695 15:39:14 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.695 15:39:14 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:27:43.695 15:39:14 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:43.695 15:39:14 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:43.695 15:39:14 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:43.695 15:39:14 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:27:43.695 00:27:43.695 real 0m0.074s 00:27:43.695 user 0m0.034s 00:27:43.695 sys 0m0.046s 00:27:43.695 15:39:14 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:43.695 15:39:14 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:27:43.695 ************************************ 00:27:43.695 END TEST dma 00:27:43.695 ************************************ 00:27:43.695 15:39:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:43.695 15:39:14 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:43.695 15:39:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:43.695 15:39:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:43.695 15:39:14 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:27:43.695 ************************************ 00:27:43.695 START TEST nvmf_identify 00:27:43.695 ************************************ 00:27:43.695 15:39:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:43.695 * Looking for test storage... 00:27:43.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:27:43.696 15:39:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:45.598 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:45.598 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:45.598 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:45.598 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.598 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:45.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:45.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:27:45.859 00:27:45.859 --- 10.0.0.2 ping statistics --- 00:27:45.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.859 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:45.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:45.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:27:45.859 00:27:45.859 --- 10.0.0.1 ping statistics --- 00:27:45.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.859 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1199800 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1199800 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1199800 ']' 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:45.859 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:45.859 [2024-07-13 15:39:16.508252] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:45.859 [2024-07-13 15:39:16.508337] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.859 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.859 [2024-07-13 15:39:16.548806] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
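Before the identify test starts its own target, gather_supported_nvmf_pci_devs in nvmf/common.sh rebuilds the interface list from scratch: the known Intel E810/X722 and Mellanox device IDs are matched against the PCI bus, and for each matching function the kernel netdev name is read out of sysfs, which is how 0000:0a:00.0 and 0000:0a:00.1 resolve to cvl_0_0 and cvl_0_1 above. A minimal sketch of that sysfs lookup for one function (PCI address taken from the trace):

  pci=0000:0a:00.1
  # Each network-capable PCI function lists its kernel netdev(s) under sysfs.
  pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
  pci_net_devs=( "${pci_net_devs[@]##*/}" )    # keep only the device name, e.g. cvl_0_1
  echo "Found net devices under $pci: ${pci_net_devs[*]}"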
00:27:45.859 [2024-07-13 15:39:16.577465] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:46.120 [2024-07-13 15:39:16.671391] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:46.120 [2024-07-13 15:39:16.671449] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:46.120 [2024-07-13 15:39:16.671469] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:46.120 [2024-07-13 15:39:16.671486] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:46.120 [2024-07-13 15:39:16.671501] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:46.120 [2024-07-13 15:39:16.671572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.120 [2024-07-13 15:39:16.671605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:46.120 [2024-07-13 15:39:16.671639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:46.120 [2024-07-13 15:39:16.671644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:46.120 [2024-07-13 15:39:16.798684] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:46.120 Malloc0 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.120 
15:39:16 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:46.120 [2024-07-13 15:39:16.880254] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.120 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:46.384 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.384 15:39:16 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:46.384 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.384 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:46.384 [ 00:27:46.384 { 00:27:46.384 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:46.384 "subtype": "Discovery", 00:27:46.384 "listen_addresses": [ 00:27:46.384 { 00:27:46.384 "trtype": "TCP", 00:27:46.384 "adrfam": "IPv4", 00:27:46.384 "traddr": "10.0.0.2", 00:27:46.384 "trsvcid": "4420" 00:27:46.384 } 00:27:46.384 ], 00:27:46.384 "allow_any_host": true, 00:27:46.384 "hosts": [] 00:27:46.384 }, 00:27:46.384 { 00:27:46.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.384 "subtype": "NVMe", 00:27:46.384 "listen_addresses": [ 00:27:46.384 { 00:27:46.384 "trtype": "TCP", 00:27:46.384 "adrfam": "IPv4", 00:27:46.384 "traddr": "10.0.0.2", 00:27:46.384 "trsvcid": "4420" 00:27:46.384 } 00:27:46.384 ], 00:27:46.384 "allow_any_host": true, 00:27:46.384 "hosts": [], 00:27:46.384 "serial_number": "SPDK00000000000001", 00:27:46.384 "model_number": "SPDK bdev Controller", 00:27:46.384 "max_namespaces": 32, 00:27:46.384 "min_cntlid": 1, 00:27:46.384 "max_cntlid": 65519, 00:27:46.384 "namespaces": [ 00:27:46.384 { 00:27:46.384 "nsid": 1, 00:27:46.384 "bdev_name": "Malloc0", 00:27:46.384 "name": "Malloc0", 00:27:46.384 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:46.384 "eui64": "ABCDEF0123456789", 00:27:46.384 "uuid": "11c88329-4266-4884-b422-983daf94bdb1" 00:27:46.384 } 00:27:46.384 ] 00:27:46.384 } 00:27:46.384 ] 00:27:46.384 15:39:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.384 15:39:16 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:46.384 [2024-07-13 15:39:16.922636] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
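The identify test then builds a Malloc-backed subsystem plus the discovery service over RPC and points the spdk_nvme_identify example at the discovery NQN, so the tool walks the identify structures and log pages the target reports over TCP. A roughly equivalent sequence with scripts/rpc.py and the example binary (values taken from the trace; paths abbreviated relative to the SPDK tree):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Identify everything reachable through the discovery subsystem:
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all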
00:27:46.384 [2024-07-13 15:39:16.922679] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199859 ] 00:27:46.384 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.384 [2024-07-13 15:39:16.940639] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:46.384 [2024-07-13 15:39:16.958294] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:46.384 [2024-07-13 15:39:16.958355] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:46.384 [2024-07-13 15:39:16.958365] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:46.384 [2024-07-13 15:39:16.958382] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:46.384 [2024-07-13 15:39:16.958393] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:46.384 [2024-07-13 15:39:16.961941] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:46.384 [2024-07-13 15:39:16.962013] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x216d630 0 00:27:46.384 [2024-07-13 15:39:16.968893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:46.384 [2024-07-13 15:39:16.968916] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:46.384 [2024-07-13 15:39:16.968926] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:46.384 [2024-07-13 15:39:16.968932] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:46.384 [2024-07-13 15:39:16.969005] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.384 [2024-07-13 15:39:16.969020] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.384 [2024-07-13 15:39:16.969028] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x216d630) 00:27:46.384 [2024-07-13 15:39:16.969048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:46.384 [2024-07-13 15:39:16.969075] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bbf80, cid 0, qid 0 00:27:46.384 [2024-07-13 15:39:16.976877] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.384 [2024-07-13 15:39:16.976895] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.384 [2024-07-13 15:39:16.976907] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.384 [2024-07-13 15:39:16.976916] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bbf80) on tqpair=0x216d630 00:27:46.384 [2024-07-13 15:39:16.976952] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:46.384 [2024-07-13 15:39:16.976966] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:46.384 [2024-07-13 15:39:16.976977] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs 
(no timeout) 00:27:46.384 [2024-07-13 15:39:16.977001] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.384 [2024-07-13 15:39:16.977010] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.384 [2024-07-13 15:39:16.977017] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x216d630) 00:27:46.384 [2024-07-13 15:39:16.977029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.384 [2024-07-13 15:39:16.977053] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bbf80, cid 0, qid 0 00:27:46.384 [2024-07-13 15:39:16.977211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.384 [2024-07-13 15:39:16.977223] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.384 [2024-07-13 15:39:16.977231] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.384 [2024-07-13 15:39:16.977238] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bbf80) on tqpair=0x216d630 00:27:46.384 [2024-07-13 15:39:16.977247] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:46.384 [2024-07-13 15:39:16.977260] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:46.384 [2024-07-13 15:39:16.977273] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.384 [2024-07-13 15:39:16.977281] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.384 [2024-07-13 15:39:16.977287] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x216d630) 00:27:46.384 [2024-07-13 15:39:16.977298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.384 [2024-07-13 15:39:16.977320] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bbf80, cid 0, qid 0 00:27:46.384 [2024-07-13 15:39:16.977451] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.384 [2024-07-13 15:39:16.977463] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.384 [2024-07-13 15:39:16.977471] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.384 [2024-07-13 15:39:16.977478] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bbf80) on tqpair=0x216d630 00:27:46.384 [2024-07-13 15:39:16.977487] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:46.384 [2024-07-13 15:39:16.977502] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:46.384 [2024-07-13 15:39:16.977514] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.384 [2024-07-13 15:39:16.977521] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.384 [2024-07-13 15:39:16.977528] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x216d630) 00:27:46.384 [2024-07-13 15:39:16.977539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.384 [2024-07-13 15:39:16.977559] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bbf80, cid 0, qid 0 00:27:46.385 [2024-07-13 15:39:16.977688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.385 [2024-07-13 15:39:16.977703] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.385 [2024-07-13 15:39:16.977715] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.977722] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bbf80) on tqpair=0x216d630 00:27:46.385 [2024-07-13 15:39:16.977733] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:46.385 [2024-07-13 15:39:16.977750] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.977759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.977766] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x216d630) 00:27:46.385 [2024-07-13 15:39:16.977777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.385 [2024-07-13 15:39:16.977798] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bbf80, cid 0, qid 0 00:27:46.385 [2024-07-13 15:39:16.977925] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.385 [2024-07-13 15:39:16.977939] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.385 [2024-07-13 15:39:16.977946] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.977954] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bbf80) on tqpair=0x216d630 00:27:46.385 [2024-07-13 15:39:16.977963] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:46.385 [2024-07-13 15:39:16.977972] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:46.385 [2024-07-13 15:39:16.977985] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:46.385 [2024-07-13 15:39:16.978096] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:46.385 [2024-07-13 15:39:16.978105] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:46.385 [2024-07-13 15:39:16.978122] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.978130] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.978136] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x216d630) 00:27:46.385 [2024-07-13 15:39:16.978147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.385 [2024-07-13 15:39:16.978169] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bbf80, cid 0, qid 0 00:27:46.385 [2024-07-13 15:39:16.978304] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:27:46.385 [2024-07-13 15:39:16.978316] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.385 [2024-07-13 15:39:16.978323] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.978330] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bbf80) on tqpair=0x216d630 00:27:46.385 [2024-07-13 15:39:16.978339] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:46.385 [2024-07-13 15:39:16.978354] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.978363] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.978370] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x216d630) 00:27:46.385 [2024-07-13 15:39:16.978381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.385 [2024-07-13 15:39:16.978401] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bbf80, cid 0, qid 0 00:27:46.385 [2024-07-13 15:39:16.978533] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.385 [2024-07-13 15:39:16.978548] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.385 [2024-07-13 15:39:16.978556] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.978563] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bbf80) on tqpair=0x216d630 00:27:46.385 [2024-07-13 15:39:16.978571] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:46.385 [2024-07-13 15:39:16.978580] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:46.385 [2024-07-13 15:39:16.978594] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:46.385 [2024-07-13 15:39:16.978615] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:46.385 [2024-07-13 15:39:16.978633] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.978641] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x216d630) 00:27:46.385 [2024-07-13 15:39:16.978652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.385 [2024-07-13 15:39:16.978688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bbf80, cid 0, qid 0 00:27:46.385 [2024-07-13 15:39:16.978890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:46.385 [2024-07-13 15:39:16.978905] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:46.385 [2024-07-13 15:39:16.978912] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.978919] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x216d630): datao=0, datal=4096, cccid=0 00:27:46.385 [2024-07-13 15:39:16.978928] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21bbf80) on tqpair(0x216d630): expected_datao=0, payload_size=4096 00:27:46.385 [2024-07-13 15:39:16.978936] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.978949] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.978958] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.978989] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.385 [2024-07-13 15:39:16.979000] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.385 [2024-07-13 15:39:16.979007] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.979014] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bbf80) on tqpair=0x216d630 00:27:46.385 [2024-07-13 15:39:16.979028] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:46.385 [2024-07-13 15:39:16.979042] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:46.385 [2024-07-13 15:39:16.979051] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:46.385 [2024-07-13 15:39:16.979060] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:46.385 [2024-07-13 15:39:16.979069] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:46.385 [2024-07-13 15:39:16.979077] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:46.385 [2024-07-13 15:39:16.979093] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:46.385 [2024-07-13 15:39:16.979106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.979118] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.979126] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x216d630) 00:27:46.385 [2024-07-13 15:39:16.979137] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:46.385 [2024-07-13 15:39:16.979159] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bbf80, cid 0, qid 0 00:27:46.385 [2024-07-13 15:39:16.979305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.385 [2024-07-13 15:39:16.979317] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.385 [2024-07-13 15:39:16.979324] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.979332] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bbf80) on tqpair=0x216d630 00:27:46.385 [2024-07-13 15:39:16.979346] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.979354] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.979361] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x216d630) 00:27:46.385 [2024-07-13 15:39:16.979371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.385 [2024-07-13 15:39:16.979381] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.979388] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.979395] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x216d630) 00:27:46.385 [2024-07-13 15:39:16.979404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.385 [2024-07-13 15:39:16.979414] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.979420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.979427] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x216d630) 00:27:46.385 [2024-07-13 15:39:16.979436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.385 [2024-07-13 15:39:16.979446] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.979468] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.979474] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216d630) 00:27:46.385 [2024-07-13 15:39:16.979483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.385 [2024-07-13 15:39:16.979492] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:46.385 [2024-07-13 15:39:16.979511] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:46.385 [2024-07-13 15:39:16.979524] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.979531] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x216d630) 00:27:46.385 [2024-07-13 15:39:16.979542] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.385 [2024-07-13 15:39:16.979563] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bbf80, cid 0, qid 0 00:27:46.385 [2024-07-13 15:39:16.979590] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc100, cid 1, qid 0 00:27:46.385 [2024-07-13 15:39:16.979598] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc280, cid 2, qid 0 00:27:46.385 [2024-07-13 15:39:16.979606] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc400, cid 3, qid 0 00:27:46.385 [2024-07-13 15:39:16.979614] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc580, cid 4, qid 0 00:27:46.385 [2024-07-13 15:39:16.979774] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.385 [2024-07-13 15:39:16.979790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.385 [2024-07-13 15:39:16.979797] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.385 [2024-07-13 15:39:16.979804] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc580) on tqpair=0x216d630 00:27:46.385 [2024-07-13 15:39:16.979814] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:46.386 [2024-07-13 15:39:16.979824] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:46.386 [2024-07-13 15:39:16.979841] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:16.979851] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x216d630) 00:27:46.386 [2024-07-13 15:39:16.979862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.386 [2024-07-13 15:39:16.979891] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc580, cid 4, qid 0 00:27:46.386 [2024-07-13 15:39:16.980036] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:46.386 [2024-07-13 15:39:16.980049] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:46.386 [2024-07-13 15:39:16.980056] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:16.980063] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x216d630): datao=0, datal=4096, cccid=4 00:27:46.386 [2024-07-13 15:39:16.980071] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21bc580) on tqpair(0x216d630): expected_datao=0, payload_size=4096 00:27:46.386 [2024-07-13 15:39:16.980078] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:16.980089] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:16.980096] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:16.980130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.386 [2024-07-13 15:39:16.980141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.386 [2024-07-13 15:39:16.980148] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:16.980155] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc580) on tqpair=0x216d630 00:27:46.386 [2024-07-13 15:39:16.980174] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:46.386 [2024-07-13 15:39:16.980215] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:16.980226] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x216d630) 00:27:46.386 [2024-07-13 15:39:16.980237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.386 [2024-07-13 15:39:16.980249] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:16.980256] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:16.980263] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x216d630) 00:27:46.386 [2024-07-13 
15:39:16.980272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.386 [2024-07-13 15:39:16.980299] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc580, cid 4, qid 0 00:27:46.386 [2024-07-13 15:39:16.980310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc700, cid 5, qid 0 00:27:46.386 [2024-07-13 15:39:16.980484] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:46.386 [2024-07-13 15:39:16.980497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:46.386 [2024-07-13 15:39:16.980504] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:16.980514] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x216d630): datao=0, datal=1024, cccid=4 00:27:46.386 [2024-07-13 15:39:16.980523] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21bc580) on tqpair(0x216d630): expected_datao=0, payload_size=1024 00:27:46.386 [2024-07-13 15:39:16.980531] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:16.980541] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:16.980548] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:16.980557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.386 [2024-07-13 15:39:16.980566] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.386 [2024-07-13 15:39:16.980573] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:16.980580] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc700) on tqpair=0x216d630 00:27:46.386 [2024-07-13 15:39:17.022877] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.386 [2024-07-13 15:39:17.022898] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.386 [2024-07-13 15:39:17.022907] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:17.022914] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc580) on tqpair=0x216d630 00:27:46.386 [2024-07-13 15:39:17.022936] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:17.022945] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x216d630) 00:27:46.386 [2024-07-13 15:39:17.022957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.386 [2024-07-13 15:39:17.022988] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc580, cid 4, qid 0 00:27:46.386 [2024-07-13 15:39:17.023148] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:46.386 [2024-07-13 15:39:17.023164] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:46.386 [2024-07-13 15:39:17.023172] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:17.023178] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x216d630): datao=0, datal=3072, cccid=4 00:27:46.386 [2024-07-13 15:39:17.023187] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21bc580) on tqpair(0x216d630): expected_datao=0, payload_size=3072 00:27:46.386 
[2024-07-13 15:39:17.023194] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:17.023205] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:17.023213] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:17.023247] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.386 [2024-07-13 15:39:17.023258] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.386 [2024-07-13 15:39:17.023266] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:17.023273] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc580) on tqpair=0x216d630 00:27:46.386 [2024-07-13 15:39:17.023288] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:17.023296] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x216d630) 00:27:46.386 [2024-07-13 15:39:17.023307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.386 [2024-07-13 15:39:17.023335] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc580, cid 4, qid 0 00:27:46.386 [2024-07-13 15:39:17.023482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:46.386 [2024-07-13 15:39:17.023494] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:46.386 [2024-07-13 15:39:17.023501] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:17.023515] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x216d630): datao=0, datal=8, cccid=4 00:27:46.386 [2024-07-13 15:39:17.023524] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21bc580) on tqpair(0x216d630): expected_datao=0, payload_size=8 00:27:46.386 [2024-07-13 15:39:17.023531] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:17.023541] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:17.023549] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:17.063985] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.386 [2024-07-13 15:39:17.064005] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.386 [2024-07-13 15:39:17.064013] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.386 [2024-07-13 15:39:17.064021] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc580) on tqpair=0x216d630 00:27:46.386 ===================================================== 00:27:46.386 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:46.386 ===================================================== 00:27:46.386 Controller Capabilities/Features 00:27:46.386 ================================ 00:27:46.386 Vendor ID: 0000 00:27:46.386 Subsystem Vendor ID: 0000 00:27:46.386 Serial Number: .................... 00:27:46.386 Model Number: ........................................ 
00:27:46.386 Firmware Version: 24.09 00:27:46.386 Recommended Arb Burst: 0 00:27:46.386 IEEE OUI Identifier: 00 00 00 00:27:46.386 Multi-path I/O 00:27:46.386 May have multiple subsystem ports: No 00:27:46.386 May have multiple controllers: No 00:27:46.386 Associated with SR-IOV VF: No 00:27:46.386 Max Data Transfer Size: 131072 00:27:46.386 Max Number of Namespaces: 0 00:27:46.386 Max Number of I/O Queues: 1024 00:27:46.386 NVMe Specification Version (VS): 1.3 00:27:46.386 NVMe Specification Version (Identify): 1.3 00:27:46.386 Maximum Queue Entries: 128 00:27:46.386 Contiguous Queues Required: Yes 00:27:46.386 Arbitration Mechanisms Supported 00:27:46.386 Weighted Round Robin: Not Supported 00:27:46.386 Vendor Specific: Not Supported 00:27:46.386 Reset Timeout: 15000 ms 00:27:46.386 Doorbell Stride: 4 bytes 00:27:46.386 NVM Subsystem Reset: Not Supported 00:27:46.386 Command Sets Supported 00:27:46.386 NVM Command Set: Supported 00:27:46.386 Boot Partition: Not Supported 00:27:46.386 Memory Page Size Minimum: 4096 bytes 00:27:46.386 Memory Page Size Maximum: 4096 bytes 00:27:46.386 Persistent Memory Region: Not Supported 00:27:46.386 Optional Asynchronous Events Supported 00:27:46.386 Namespace Attribute Notices: Not Supported 00:27:46.386 Firmware Activation Notices: Not Supported 00:27:46.386 ANA Change Notices: Not Supported 00:27:46.386 PLE Aggregate Log Change Notices: Not Supported 00:27:46.386 LBA Status Info Alert Notices: Not Supported 00:27:46.386 EGE Aggregate Log Change Notices: Not Supported 00:27:46.386 Normal NVM Subsystem Shutdown event: Not Supported 00:27:46.386 Zone Descriptor Change Notices: Not Supported 00:27:46.386 Discovery Log Change Notices: Supported 00:27:46.386 Controller Attributes 00:27:46.386 128-bit Host Identifier: Not Supported 00:27:46.386 Non-Operational Permissive Mode: Not Supported 00:27:46.386 NVM Sets: Not Supported 00:27:46.386 Read Recovery Levels: Not Supported 00:27:46.386 Endurance Groups: Not Supported 00:27:46.386 Predictable Latency Mode: Not Supported 00:27:46.386 Traffic Based Keep ALive: Not Supported 00:27:46.386 Namespace Granularity: Not Supported 00:27:46.386 SQ Associations: Not Supported 00:27:46.386 UUID List: Not Supported 00:27:46.386 Multi-Domain Subsystem: Not Supported 00:27:46.386 Fixed Capacity Management: Not Supported 00:27:46.386 Variable Capacity Management: Not Supported 00:27:46.386 Delete Endurance Group: Not Supported 00:27:46.386 Delete NVM Set: Not Supported 00:27:46.386 Extended LBA Formats Supported: Not Supported 00:27:46.386 Flexible Data Placement Supported: Not Supported 00:27:46.386 00:27:46.386 Controller Memory Buffer Support 00:27:46.386 ================================ 00:27:46.386 Supported: No 00:27:46.386 00:27:46.387 Persistent Memory Region Support 00:27:46.387 ================================ 00:27:46.387 Supported: No 00:27:46.387 00:27:46.387 Admin Command Set Attributes 00:27:46.387 ============================ 00:27:46.387 Security Send/Receive: Not Supported 00:27:46.387 Format NVM: Not Supported 00:27:46.387 Firmware Activate/Download: Not Supported 00:27:46.387 Namespace Management: Not Supported 00:27:46.387 Device Self-Test: Not Supported 00:27:46.387 Directives: Not Supported 00:27:46.387 NVMe-MI: Not Supported 00:27:46.387 Virtualization Management: Not Supported 00:27:46.387 Doorbell Buffer Config: Not Supported 00:27:46.387 Get LBA Status Capability: Not Supported 00:27:46.387 Command & Feature Lockdown Capability: Not Supported 00:27:46.387 Abort Command Limit: 1 00:27:46.387 Async 
Event Request Limit: 4 00:27:46.387 Number of Firmware Slots: N/A 00:27:46.387 Firmware Slot 1 Read-Only: N/A 00:27:46.387 Firmware Activation Without Reset: N/A 00:27:46.387 Multiple Update Detection Support: N/A 00:27:46.387 Firmware Update Granularity: No Information Provided 00:27:46.387 Per-Namespace SMART Log: No 00:27:46.387 Asymmetric Namespace Access Log Page: Not Supported 00:27:46.387 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:46.387 Command Effects Log Page: Not Supported 00:27:46.387 Get Log Page Extended Data: Supported 00:27:46.387 Telemetry Log Pages: Not Supported 00:27:46.387 Persistent Event Log Pages: Not Supported 00:27:46.387 Supported Log Pages Log Page: May Support 00:27:46.387 Commands Supported & Effects Log Page: Not Supported 00:27:46.387 Feature Identifiers & Effects Log Page:May Support 00:27:46.387 NVMe-MI Commands & Effects Log Page: May Support 00:27:46.387 Data Area 4 for Telemetry Log: Not Supported 00:27:46.387 Error Log Page Entries Supported: 128 00:27:46.387 Keep Alive: Not Supported 00:27:46.387 00:27:46.387 NVM Command Set Attributes 00:27:46.387 ========================== 00:27:46.387 Submission Queue Entry Size 00:27:46.387 Max: 1 00:27:46.387 Min: 1 00:27:46.387 Completion Queue Entry Size 00:27:46.387 Max: 1 00:27:46.387 Min: 1 00:27:46.387 Number of Namespaces: 0 00:27:46.387 Compare Command: Not Supported 00:27:46.387 Write Uncorrectable Command: Not Supported 00:27:46.387 Dataset Management Command: Not Supported 00:27:46.387 Write Zeroes Command: Not Supported 00:27:46.387 Set Features Save Field: Not Supported 00:27:46.387 Reservations: Not Supported 00:27:46.387 Timestamp: Not Supported 00:27:46.387 Copy: Not Supported 00:27:46.387 Volatile Write Cache: Not Present 00:27:46.387 Atomic Write Unit (Normal): 1 00:27:46.387 Atomic Write Unit (PFail): 1 00:27:46.387 Atomic Compare & Write Unit: 1 00:27:46.387 Fused Compare & Write: Supported 00:27:46.387 Scatter-Gather List 00:27:46.387 SGL Command Set: Supported 00:27:46.387 SGL Keyed: Supported 00:27:46.387 SGL Bit Bucket Descriptor: Not Supported 00:27:46.387 SGL Metadata Pointer: Not Supported 00:27:46.387 Oversized SGL: Not Supported 00:27:46.387 SGL Metadata Address: Not Supported 00:27:46.387 SGL Offset: Supported 00:27:46.387 Transport SGL Data Block: Not Supported 00:27:46.387 Replay Protected Memory Block: Not Supported 00:27:46.387 00:27:46.387 Firmware Slot Information 00:27:46.387 ========================= 00:27:46.387 Active slot: 0 00:27:46.387 00:27:46.387 00:27:46.387 Error Log 00:27:46.387 ========= 00:27:46.387 00:27:46.387 Active Namespaces 00:27:46.387 ================= 00:27:46.387 Discovery Log Page 00:27:46.387 ================== 00:27:46.387 Generation Counter: 2 00:27:46.387 Number of Records: 2 00:27:46.387 Record Format: 0 00:27:46.387 00:27:46.387 Discovery Log Entry 0 00:27:46.387 ---------------------- 00:27:46.387 Transport Type: 3 (TCP) 00:27:46.387 Address Family: 1 (IPv4) 00:27:46.387 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:46.387 Entry Flags: 00:27:46.387 Duplicate Returned Information: 1 00:27:46.387 Explicit Persistent Connection Support for Discovery: 1 00:27:46.387 Transport Requirements: 00:27:46.387 Secure Channel: Not Required 00:27:46.387 Port ID: 0 (0x0000) 00:27:46.387 Controller ID: 65535 (0xffff) 00:27:46.387 Admin Max SQ Size: 128 00:27:46.387 Transport Service Identifier: 4420 00:27:46.387 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:46.387 Transport Address: 10.0.0.2 00:27:46.387 
Discovery Log Entry 1 00:27:46.387 ---------------------- 00:27:46.387 Transport Type: 3 (TCP) 00:27:46.387 Address Family: 1 (IPv4) 00:27:46.387 Subsystem Type: 2 (NVM Subsystem) 00:27:46.387 Entry Flags: 00:27:46.387 Duplicate Returned Information: 0 00:27:46.387 Explicit Persistent Connection Support for Discovery: 0 00:27:46.387 Transport Requirements: 00:27:46.387 Secure Channel: Not Required 00:27:46.387 Port ID: 0 (0x0000) 00:27:46.387 Controller ID: 65535 (0xffff) 00:27:46.387 Admin Max SQ Size: 128 00:27:46.387 Transport Service Identifier: 4420 00:27:46.387 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:46.387 Transport Address: 10.0.0.2 [2024-07-13 15:39:17.064139] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:46.387 [2024-07-13 15:39:17.064163] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bbf80) on tqpair=0x216d630 00:27:46.387 [2024-07-13 15:39:17.064177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.387 [2024-07-13 15:39:17.064187] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc100) on tqpair=0x216d630 00:27:46.387 [2024-07-13 15:39:17.064195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.387 [2024-07-13 15:39:17.064203] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc280) on tqpair=0x216d630 00:27:46.387 [2024-07-13 15:39:17.064211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.387 [2024-07-13 15:39:17.064219] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc400) on tqpair=0x216d630 00:27:46.387 [2024-07-13 15:39:17.064227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.387 [2024-07-13 15:39:17.064246] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.387 [2024-07-13 15:39:17.064270] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.387 [2024-07-13 15:39:17.064277] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216d630) 00:27:46.387 [2024-07-13 15:39:17.064288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.387 [2024-07-13 15:39:17.064314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc400, cid 3, qid 0 00:27:46.387 [2024-07-13 15:39:17.064458] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.387 [2024-07-13 15:39:17.064471] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.387 [2024-07-13 15:39:17.064478] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.387 [2024-07-13 15:39:17.064485] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc400) on tqpair=0x216d630 00:27:46.387 [2024-07-13 15:39:17.064499] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.387 [2024-07-13 15:39:17.064507] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.387 [2024-07-13 15:39:17.064513] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216d630) 00:27:46.387 [2024-07-13 
15:39:17.064524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.387 [2024-07-13 15:39:17.064551] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc400, cid 3, qid 0 00:27:46.387 [2024-07-13 15:39:17.064699] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.387 [2024-07-13 15:39:17.064714] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.387 [2024-07-13 15:39:17.064722] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.387 [2024-07-13 15:39:17.064732] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc400) on tqpair=0x216d630 00:27:46.387 [2024-07-13 15:39:17.064743] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:46.387 [2024-07-13 15:39:17.064753] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:46.387 [2024-07-13 15:39:17.064770] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.387 [2024-07-13 15:39:17.064779] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.387 [2024-07-13 15:39:17.064786] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216d630) 00:27:46.387 [2024-07-13 15:39:17.064797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.387 [2024-07-13 15:39:17.064818] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc400, cid 3, qid 0 00:27:46.387 [2024-07-13 15:39:17.064992] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.387 [2024-07-13 15:39:17.065007] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.387 [2024-07-13 15:39:17.065014] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.387 [2024-07-13 15:39:17.065022] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc400) on tqpair=0x216d630 00:27:46.387 [2024-07-13 15:39:17.065040] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.387 [2024-07-13 15:39:17.065049] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.387 [2024-07-13 15:39:17.065056] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216d630) 00:27:46.387 [2024-07-13 15:39:17.065067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.387 [2024-07-13 15:39:17.065088] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc400, cid 3, qid 0 00:27:46.387 [2024-07-13 15:39:17.065221] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.387 [2024-07-13 15:39:17.065237] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.387 [2024-07-13 15:39:17.065244] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.387 [2024-07-13 15:39:17.065251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc400) on tqpair=0x216d630 00:27:46.388 [2024-07-13 15:39:17.065268] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.065277] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.065284] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216d630) 00:27:46.388 [2024-07-13 15:39:17.065294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.388 [2024-07-13 15:39:17.065315] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc400, cid 3, qid 0 00:27:46.388 [2024-07-13 15:39:17.065453] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.388 [2024-07-13 15:39:17.065468] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.388 [2024-07-13 15:39:17.065475] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.065482] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc400) on tqpair=0x216d630 00:27:46.388 [2024-07-13 15:39:17.065499] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.065508] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.065515] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216d630) 00:27:46.388 [2024-07-13 15:39:17.065525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.388 [2024-07-13 15:39:17.065546] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc400, cid 3, qid 0 00:27:46.388 [2024-07-13 15:39:17.065673] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.388 [2024-07-13 15:39:17.065690] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.388 [2024-07-13 15:39:17.065698] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.065705] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc400) on tqpair=0x216d630 00:27:46.388 [2024-07-13 15:39:17.065722] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.065731] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.065738] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216d630) 00:27:46.388 [2024-07-13 15:39:17.065748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.388 [2024-07-13 15:39:17.065769] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc400, cid 3, qid 0 00:27:46.388 [2024-07-13 15:39:17.065908] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.388 [2024-07-13 15:39:17.065924] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.388 [2024-07-13 15:39:17.065931] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.065938] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc400) on tqpair=0x216d630 00:27:46.388 [2024-07-13 15:39:17.065955] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.065964] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.065971] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216d630) 00:27:46.388 [2024-07-13 15:39:17.065982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.388 [2024-07-13 15:39:17.066003] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc400, cid 3, qid 0 00:27:46.388 [2024-07-13 15:39:17.066137] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.388 [2024-07-13 15:39:17.066149] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.388 [2024-07-13 15:39:17.066156] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.066163] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc400) on tqpair=0x216d630 00:27:46.388 [2024-07-13 15:39:17.066179] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.066188] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.066195] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216d630) 00:27:46.388 [2024-07-13 15:39:17.066206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.388 [2024-07-13 15:39:17.066226] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc400, cid 3, qid 0 00:27:46.388 [2024-07-13 15:39:17.066360] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.388 [2024-07-13 15:39:17.066375] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.388 [2024-07-13 15:39:17.066382] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.066389] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc400) on tqpair=0x216d630 00:27:46.388 [2024-07-13 15:39:17.066406] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.066415] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.066422] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216d630) 00:27:46.388 [2024-07-13 15:39:17.066432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.388 [2024-07-13 15:39:17.066453] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc400, cid 3, qid 0 00:27:46.388 [2024-07-13 15:39:17.066587] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.388 [2024-07-13 15:39:17.066603] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.388 [2024-07-13 15:39:17.066614] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.066622] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc400) on tqpair=0x216d630 00:27:46.388 [2024-07-13 15:39:17.066638] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.066648] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.066654] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216d630) 00:27:46.388 [2024-07-13 15:39:17.066665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.388 [2024-07-13 15:39:17.066686] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc400, cid 3, qid 0 00:27:46.388 
[2024-07-13 15:39:17.066816] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.388 [2024-07-13 15:39:17.066831] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.388 [2024-07-13 15:39:17.066838] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.066845] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc400) on tqpair=0x216d630 00:27:46.388 [2024-07-13 15:39:17.066862] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.066878] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.066885] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216d630) 00:27:46.388 [2024-07-13 15:39:17.066896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.388 [2024-07-13 15:39:17.066918] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc400, cid 3, qid 0 00:27:46.388 [2024-07-13 15:39:17.067052] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.388 [2024-07-13 15:39:17.067067] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.388 [2024-07-13 15:39:17.067075] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.067082] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc400) on tqpair=0x216d630 00:27:46.388 [2024-07-13 15:39:17.067098] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.067108] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.067115] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216d630) 00:27:46.388 [2024-07-13 15:39:17.067125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.388 [2024-07-13 15:39:17.067146] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc400, cid 3, qid 0 00:27:46.388 [2024-07-13 15:39:17.067287] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.388 [2024-07-13 15:39:17.067299] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.388 [2024-07-13 15:39:17.067306] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.067313] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc400) on tqpair=0x216d630 00:27:46.388 [2024-07-13 15:39:17.067329] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.067338] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.067345] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216d630) 00:27:46.388 [2024-07-13 15:39:17.067355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.388 [2024-07-13 15:39:17.067376] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc400, cid 3, qid 0 00:27:46.388 [2024-07-13 15:39:17.067517] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.388 [2024-07-13 15:39:17.067529] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
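The discovery log page printed above advertises two entries at 10.0.0.2:4420: the discovery subsystem itself (entry 0) and nqn.2016-06.io.spdk:cnode1 (entry 1). This test reads it with the user-space spdk_nvme_identify tool; purely as an illustrative aside (not part of this run), an initiator host with nvme-cli and the kernel nvme-tcp module loaded would query and attach to the same target roughly like this:

  # list the entries exposed by the discovery service at 10.0.0.2:4420
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  # connect to the NVM subsystem advertised in Discovery Log Entry 1
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420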
00:27:46.388 [2024-07-13 15:39:17.067536] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.067547] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc400) on tqpair=0x216d630 00:27:46.388 [2024-07-13 15:39:17.067564] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.388 [2024-07-13 15:39:17.067573] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.389 [2024-07-13 15:39:17.067580] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216d630) 00:27:46.389 [2024-07-13 15:39:17.067590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.389 [2024-07-13 15:39:17.067611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc400, cid 3, qid 0 00:27:46.389 [2024-07-13 15:39:17.067745] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.389 [2024-07-13 15:39:17.067760] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.389 [2024-07-13 15:39:17.067767] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.389 [2024-07-13 15:39:17.067775] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc400) on tqpair=0x216d630 00:27:46.389 [2024-07-13 15:39:17.067791] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.389 [2024-07-13 15:39:17.067800] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.389 [2024-07-13 15:39:17.067807] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216d630) 00:27:46.389 [2024-07-13 15:39:17.067817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.389 [2024-07-13 15:39:17.067838] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc400, cid 3, qid 0 00:27:46.389 [2024-07-13 15:39:17.071878] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.389 [2024-07-13 15:39:17.071895] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.389 [2024-07-13 15:39:17.071903] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.389 [2024-07-13 15:39:17.071910] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21bc400) on tqpair=0x216d630 00:27:46.389 [2024-07-13 15:39:17.071928] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.389 [2024-07-13 15:39:17.071937] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.389 [2024-07-13 15:39:17.071944] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216d630) 00:27:46.389 [2024-07-13 15:39:17.071955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.389 [2024-07-13 15:39:17.071977] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21bc400, cid 3, qid 0 00:27:46.389 [2024-07-13 15:39:17.072117] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.389 [2024-07-13 15:39:17.072129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.389 [2024-07-13 15:39:17.072136] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.389 [2024-07-13 15:39:17.072144] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x21bc400) on tqpair=0x216d630 00:27:46.389 [2024-07-13 15:39:17.072157] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:27:46.389 00:27:46.389 15:39:17 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:46.389 [2024-07-13 15:39:17.107085] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:27:46.389 [2024-07-13 15:39:17.107129] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199861 ] 00:27:46.389 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.389 [2024-07-13 15:39:17.125319] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:46.389 [2024-07-13 15:39:17.142817] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:46.389 [2024-07-13 15:39:17.142892] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:46.389 [2024-07-13 15:39:17.142904] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:46.389 [2024-07-13 15:39:17.142922] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:46.389 [2024-07-13 15:39:17.142933] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:46.389 [2024-07-13 15:39:17.143178] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:46.389 [2024-07-13 15:39:17.143229] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x5a0630 0 00:27:46.653 [2024-07-13 15:39:17.149900] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:46.653 [2024-07-13 15:39:17.149920] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:46.653 [2024-07-13 15:39:17.149928] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:46.653 [2024-07-13 15:39:17.149935] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:46.653 [2024-07-13 15:39:17.149988] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.653 [2024-07-13 15:39:17.150001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.653 [2024-07-13 15:39:17.150009] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a0630) 00:27:46.653 [2024-07-13 15:39:17.150024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:46.653 [2024-07-13 15:39:17.150050] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5eef80, cid 0, qid 0 00:27:46.653 [2024-07-13 15:39:17.157885] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.653 [2024-07-13 15:39:17.157904] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.653 [2024-07-13 15:39:17.157911] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.653 
[2024-07-13 15:39:17.157919] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5eef80) on tqpair=0x5a0630 00:27:46.653 [2024-07-13 15:39:17.157936] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:46.653 [2024-07-13 15:39:17.157963] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:46.653 [2024-07-13 15:39:17.157973] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:46.653 [2024-07-13 15:39:17.157992] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.653 [2024-07-13 15:39:17.158001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.158008] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a0630) 00:27:46.654 [2024-07-13 15:39:17.158019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.654 [2024-07-13 15:39:17.158044] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5eef80, cid 0, qid 0 00:27:46.654 [2024-07-13 15:39:17.158212] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.654 [2024-07-13 15:39:17.158227] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.654 [2024-07-13 15:39:17.158234] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.158241] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5eef80) on tqpair=0x5a0630 00:27:46.654 [2024-07-13 15:39:17.158249] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:46.654 [2024-07-13 15:39:17.158268] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:46.654 [2024-07-13 15:39:17.158282] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.158296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.158312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a0630) 00:27:46.654 [2024-07-13 15:39:17.158329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.654 [2024-07-13 15:39:17.158352] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5eef80, cid 0, qid 0 00:27:46.654 [2024-07-13 15:39:17.158583] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.654 [2024-07-13 15:39:17.158598] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.654 [2024-07-13 15:39:17.158605] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.158612] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5eef80) on tqpair=0x5a0630 00:27:46.654 [2024-07-13 15:39:17.158621] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:46.654 [2024-07-13 15:39:17.158635] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:46.654 [2024-07-13 15:39:17.158648] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.158656] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.158662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a0630) 00:27:46.654 [2024-07-13 15:39:17.158673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.654 [2024-07-13 15:39:17.158694] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5eef80, cid 0, qid 0 00:27:46.654 [2024-07-13 15:39:17.158832] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.654 [2024-07-13 15:39:17.158844] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.654 [2024-07-13 15:39:17.158851] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.158858] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5eef80) on tqpair=0x5a0630 00:27:46.654 [2024-07-13 15:39:17.158874] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:46.654 [2024-07-13 15:39:17.158892] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.158901] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.158908] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a0630) 00:27:46.654 [2024-07-13 15:39:17.158919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.654 [2024-07-13 15:39:17.158940] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5eef80, cid 0, qid 0 00:27:46.654 [2024-07-13 15:39:17.159066] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.654 [2024-07-13 15:39:17.159081] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.654 [2024-07-13 15:39:17.159088] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.159095] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5eef80) on tqpair=0x5a0630 00:27:46.654 [2024-07-13 15:39:17.159102] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:46.654 [2024-07-13 15:39:17.159111] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:46.654 [2024-07-13 15:39:17.159125] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:46.654 [2024-07-13 15:39:17.159240] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:46.654 [2024-07-13 15:39:17.159248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:46.654 [2024-07-13 15:39:17.159261] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.159269] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.159275] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a0630) 
00:27:46.654 [2024-07-13 15:39:17.159285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.654 [2024-07-13 15:39:17.159306] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5eef80, cid 0, qid 0 00:27:46.654 [2024-07-13 15:39:17.159472] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.654 [2024-07-13 15:39:17.159487] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.654 [2024-07-13 15:39:17.159494] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.159500] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5eef80) on tqpair=0x5a0630 00:27:46.654 [2024-07-13 15:39:17.159509] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:46.654 [2024-07-13 15:39:17.159525] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.159534] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.159541] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a0630) 00:27:46.654 [2024-07-13 15:39:17.159552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.654 [2024-07-13 15:39:17.159572] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5eef80, cid 0, qid 0 00:27:46.654 [2024-07-13 15:39:17.159694] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.654 [2024-07-13 15:39:17.159709] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.654 [2024-07-13 15:39:17.159716] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.159723] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5eef80) on tqpair=0x5a0630 00:27:46.654 [2024-07-13 15:39:17.159731] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:46.654 [2024-07-13 15:39:17.159739] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:46.654 [2024-07-13 15:39:17.159753] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:46.654 [2024-07-13 15:39:17.159767] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:46.654 [2024-07-13 15:39:17.159782] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.159790] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a0630) 00:27:46.654 [2024-07-13 15:39:17.159801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.654 [2024-07-13 15:39:17.159822] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5eef80, cid 0, qid 0 00:27:46.654 [2024-07-13 15:39:17.160009] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:46.654 [2024-07-13 15:39:17.160023] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =7 00:27:46.654 [2024-07-13 15:39:17.160030] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.160037] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5a0630): datao=0, datal=4096, cccid=0 00:27:46.654 [2024-07-13 15:39:17.160049] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5eef80) on tqpair(0x5a0630): expected_datao=0, payload_size=4096 00:27:46.654 [2024-07-13 15:39:17.160057] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.160068] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.160075] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.160098] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.654 [2024-07-13 15:39:17.160109] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.654 [2024-07-13 15:39:17.160115] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.160122] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5eef80) on tqpair=0x5a0630 00:27:46.654 [2024-07-13 15:39:17.160133] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:46.654 [2024-07-13 15:39:17.160146] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:46.654 [2024-07-13 15:39:17.160154] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:46.654 [2024-07-13 15:39:17.160161] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:46.654 [2024-07-13 15:39:17.160169] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:46.654 [2024-07-13 15:39:17.160177] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:46.654 [2024-07-13 15:39:17.160191] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:46.654 [2024-07-13 15:39:17.160203] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.160210] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.160217] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a0630) 00:27:46.654 [2024-07-13 15:39:17.160228] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:46.654 [2024-07-13 15:39:17.160249] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5eef80, cid 0, qid 0 00:27:46.654 [2024-07-13 15:39:17.160380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.654 [2024-07-13 15:39:17.160395] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.654 [2024-07-13 15:39:17.160402] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.160409] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5eef80) on tqpair=0x5a0630 00:27:46.654 [2024-07-13 15:39:17.160421] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.160429] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.160435] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5a0630) 00:27:46.654 [2024-07-13 15:39:17.160445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.654 [2024-07-13 15:39:17.160455] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.160462] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.654 [2024-07-13 15:39:17.160468] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x5a0630) 00:27:46.655 [2024-07-13 15:39:17.160477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.655 [2024-07-13 15:39:17.160487] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.160493] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.160500] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x5a0630) 00:27:46.655 [2024-07-13 15:39:17.160512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.655 [2024-07-13 15:39:17.160523] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.160529] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.160536] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a0630) 00:27:46.655 [2024-07-13 15:39:17.160560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.655 [2024-07-13 15:39:17.160569] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:46.655 [2024-07-13 15:39:17.160589] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:46.655 [2024-07-13 15:39:17.160602] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.160609] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5a0630) 00:27:46.655 [2024-07-13 15:39:17.160618] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.655 [2024-07-13 15:39:17.160640] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5eef80, cid 0, qid 0 00:27:46.655 [2024-07-13 15:39:17.160667] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ef100, cid 1, qid 0 00:27:46.655 [2024-07-13 15:39:17.160675] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ef280, cid 2, qid 0 00:27:46.655 [2024-07-13 15:39:17.160683] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ef400, cid 3, qid 0 00:27:46.655 [2024-07-13 15:39:17.160690] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ef580, cid 4, qid 0 00:27:46.655 [2024-07-13 15:39:17.160874] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:27:46.655 [2024-07-13 15:39:17.160890] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.655 [2024-07-13 15:39:17.160897] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.160904] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ef580) on tqpair=0x5a0630 00:27:46.655 [2024-07-13 15:39:17.160913] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:46.655 [2024-07-13 15:39:17.160922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:46.655 [2024-07-13 15:39:17.160937] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:46.655 [2024-07-13 15:39:17.160949] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:46.655 [2024-07-13 15:39:17.160960] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.160967] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.160973] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5a0630) 00:27:46.655 [2024-07-13 15:39:17.160984] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:46.655 [2024-07-13 15:39:17.161005] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ef580, cid 4, qid 0 00:27:46.655 [2024-07-13 15:39:17.161165] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.655 [2024-07-13 15:39:17.161180] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.655 [2024-07-13 15:39:17.161187] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.161194] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ef580) on tqpair=0x5a0630 00:27:46.655 [2024-07-13 15:39:17.161262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:46.655 [2024-07-13 15:39:17.161282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:46.655 [2024-07-13 15:39:17.161297] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.161304] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5a0630) 00:27:46.655 [2024-07-13 15:39:17.161330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.655 [2024-07-13 15:39:17.161351] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ef580, cid 4, qid 0 00:27:46.655 [2024-07-13 15:39:17.161551] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:46.655 [2024-07-13 15:39:17.161566] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:46.655 [2024-07-13 15:39:17.161573] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.161580] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5a0630): datao=0, datal=4096, cccid=4 00:27:46.655 [2024-07-13 15:39:17.161588] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5ef580) on tqpair(0x5a0630): expected_datao=0, payload_size=4096 00:27:46.655 [2024-07-13 15:39:17.161595] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.161612] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.161621] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.161697] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.655 [2024-07-13 15:39:17.161708] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.655 [2024-07-13 15:39:17.161715] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.161722] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ef580) on tqpair=0x5a0630 00:27:46.655 [2024-07-13 15:39:17.161739] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:46.655 [2024-07-13 15:39:17.161761] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:46.655 [2024-07-13 15:39:17.161780] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:46.655 [2024-07-13 15:39:17.161793] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.161801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5a0630) 00:27:46.655 [2024-07-13 15:39:17.161811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.655 [2024-07-13 15:39:17.161832] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ef580, cid 4, qid 0 00:27:46.655 [2024-07-13 15:39:17.165877] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:46.655 [2024-07-13 15:39:17.165904] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:46.655 [2024-07-13 15:39:17.165911] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.165918] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5a0630): datao=0, datal=4096, cccid=4 00:27:46.655 [2024-07-13 15:39:17.165925] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5ef580) on tqpair(0x5a0630): expected_datao=0, payload_size=4096 00:27:46.655 [2024-07-13 15:39:17.165932] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.165942] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.165949] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.165958] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.655 [2024-07-13 15:39:17.165970] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.655 [2024-07-13 15:39:17.165978] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.165984] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ef580) on tqpair=0x5a0630 
00:27:46.655 [2024-07-13 15:39:17.166007] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:46.655 [2024-07-13 15:39:17.166028] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:46.655 [2024-07-13 15:39:17.166058] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.166065] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5a0630) 00:27:46.655 [2024-07-13 15:39:17.166076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.655 [2024-07-13 15:39:17.166099] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ef580, cid 4, qid 0 00:27:46.655 [2024-07-13 15:39:17.166276] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:46.655 [2024-07-13 15:39:17.166288] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:46.655 [2024-07-13 15:39:17.166295] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.166301] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5a0630): datao=0, datal=4096, cccid=4 00:27:46.655 [2024-07-13 15:39:17.166309] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5ef580) on tqpair(0x5a0630): expected_datao=0, payload_size=4096 00:27:46.655 [2024-07-13 15:39:17.166317] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.166338] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.166347] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.166432] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.655 [2024-07-13 15:39:17.166443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.655 [2024-07-13 15:39:17.166449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.166456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ef580) on tqpair=0x5a0630 00:27:46.655 [2024-07-13 15:39:17.166470] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:46.655 [2024-07-13 15:39:17.166484] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:46.655 [2024-07-13 15:39:17.166502] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:46.655 [2024-07-13 15:39:17.166514] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:46.655 [2024-07-13 15:39:17.166523] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:46.655 [2024-07-13 15:39:17.166532] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:46.655 [2024-07-13 15:39:17.166542] 
nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:46.655 [2024-07-13 15:39:17.166550] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:46.655 [2024-07-13 15:39:17.166558] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:46.655 [2024-07-13 15:39:17.166578] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.655 [2024-07-13 15:39:17.166586] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5a0630) 00:27:46.655 [2024-07-13 15:39:17.166600] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.655 [2024-07-13 15:39:17.166612] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.166619] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.166625] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5a0630) 00:27:46.656 [2024-07-13 15:39:17.166649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:46.656 [2024-07-13 15:39:17.166674] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ef580, cid 4, qid 0 00:27:46.656 [2024-07-13 15:39:17.166685] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ef700, cid 5, qid 0 00:27:46.656 [2024-07-13 15:39:17.166883] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.656 [2024-07-13 15:39:17.166898] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.656 [2024-07-13 15:39:17.166905] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.166912] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ef580) on tqpair=0x5a0630 00:27:46.656 [2024-07-13 15:39:17.166922] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.656 [2024-07-13 15:39:17.166931] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.656 [2024-07-13 15:39:17.166937] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.166944] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ef700) on tqpair=0x5a0630 00:27:46.656 [2024-07-13 15:39:17.166959] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.166968] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5a0630) 00:27:46.656 [2024-07-13 15:39:17.166979] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.656 [2024-07-13 15:39:17.167000] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ef700, cid 5, qid 0 00:27:46.656 [2024-07-13 15:39:17.167167] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.656 [2024-07-13 15:39:17.167179] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.656 [2024-07-13 15:39:17.167186] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.167193] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ef700) on tqpair=0x5a0630 00:27:46.656 [2024-07-13 15:39:17.167208] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.167217] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5a0630) 00:27:46.656 [2024-07-13 15:39:17.167227] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.656 [2024-07-13 15:39:17.167247] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ef700, cid 5, qid 0 00:27:46.656 [2024-07-13 15:39:17.167378] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.656 [2024-07-13 15:39:17.167393] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.656 [2024-07-13 15:39:17.167400] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.167406] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ef700) on tqpair=0x5a0630 00:27:46.656 [2024-07-13 15:39:17.167422] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.167431] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5a0630) 00:27:46.656 [2024-07-13 15:39:17.167442] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.656 [2024-07-13 15:39:17.167463] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ef700, cid 5, qid 0 00:27:46.656 [2024-07-13 15:39:17.167592] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.656 [2024-07-13 15:39:17.167607] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.656 [2024-07-13 15:39:17.167614] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.167621] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ef700) on tqpair=0x5a0630 00:27:46.656 [2024-07-13 15:39:17.167644] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.167655] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5a0630) 00:27:46.656 [2024-07-13 15:39:17.167666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.656 [2024-07-13 15:39:17.167678] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.167685] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5a0630) 00:27:46.656 [2024-07-13 15:39:17.167694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.656 [2024-07-13 15:39:17.167705] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.167712] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x5a0630) 00:27:46.656 [2024-07-13 15:39:17.167722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.656 [2024-07-13 
15:39:17.167734] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.167756] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x5a0630) 00:27:46.656 [2024-07-13 15:39:17.167766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.656 [2024-07-13 15:39:17.167787] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ef700, cid 5, qid 0 00:27:46.656 [2024-07-13 15:39:17.167798] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ef580, cid 4, qid 0 00:27:46.656 [2024-07-13 15:39:17.167821] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ef880, cid 6, qid 0 00:27:46.656 [2024-07-13 15:39:17.167829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5efa00, cid 7, qid 0 00:27:46.656 [2024-07-13 15:39:17.168231] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:46.656 [2024-07-13 15:39:17.168248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:46.656 [2024-07-13 15:39:17.168255] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.168262] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5a0630): datao=0, datal=8192, cccid=5 00:27:46.656 [2024-07-13 15:39:17.168269] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5ef700) on tqpair(0x5a0630): expected_datao=0, payload_size=8192 00:27:46.656 [2024-07-13 15:39:17.168277] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.168287] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.168295] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.168303] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:46.656 [2024-07-13 15:39:17.168312] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:46.656 [2024-07-13 15:39:17.168319] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.168325] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5a0630): datao=0, datal=512, cccid=4 00:27:46.656 [2024-07-13 15:39:17.168333] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5ef580) on tqpair(0x5a0630): expected_datao=0, payload_size=512 00:27:46.656 [2024-07-13 15:39:17.168340] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.168353] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.168361] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.168369] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:46.656 [2024-07-13 15:39:17.168378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:46.656 [2024-07-13 15:39:17.168385] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.168391] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5a0630): datao=0, datal=512, cccid=6 00:27:46.656 [2024-07-13 15:39:17.168399] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5ef880) on tqpair(0x5a0630): expected_datao=0, payload_size=512 00:27:46.656 
[2024-07-13 15:39:17.168406] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.168415] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.168422] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.168430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:46.656 [2024-07-13 15:39:17.168439] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:46.656 [2024-07-13 15:39:17.168446] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.168452] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5a0630): datao=0, datal=4096, cccid=7 00:27:46.656 [2024-07-13 15:39:17.168460] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5efa00) on tqpair(0x5a0630): expected_datao=0, payload_size=4096 00:27:46.656 [2024-07-13 15:39:17.168467] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.168477] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.168484] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.168496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.656 [2024-07-13 15:39:17.168505] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.656 [2024-07-13 15:39:17.168512] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.168519] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ef700) on tqpair=0x5a0630 00:27:46.656 [2024-07-13 15:39:17.168552] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.656 [2024-07-13 15:39:17.168563] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.656 [2024-07-13 15:39:17.168569] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.168576] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ef580) on tqpair=0x5a0630 00:27:46.656 [2024-07-13 15:39:17.168590] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.656 [2024-07-13 15:39:17.168599] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.656 [2024-07-13 15:39:17.168606] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.168627] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ef880) on tqpair=0x5a0630 00:27:46.656 [2024-07-13 15:39:17.168637] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.656 [2024-07-13 15:39:17.168646] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.656 [2024-07-13 15:39:17.168652] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.656 [2024-07-13 15:39:17.168658] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5efa00) on tqpair=0x5a0630 00:27:46.656 ===================================================== 00:27:46.656 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:46.656 ===================================================== 00:27:46.656 Controller Capabilities/Features 00:27:46.656 ================================ 00:27:46.656 Vendor ID: 8086 00:27:46.656 Subsystem Vendor ID: 8086 00:27:46.656 Serial Number: SPDK00000000000001 00:27:46.656 Model Number: 
SPDK bdev Controller 00:27:46.656 Firmware Version: 24.09 00:27:46.656 Recommended Arb Burst: 6 00:27:46.656 IEEE OUI Identifier: e4 d2 5c 00:27:46.656 Multi-path I/O 00:27:46.656 May have multiple subsystem ports: Yes 00:27:46.656 May have multiple controllers: Yes 00:27:46.656 Associated with SR-IOV VF: No 00:27:46.656 Max Data Transfer Size: 131072 00:27:46.656 Max Number of Namespaces: 32 00:27:46.656 Max Number of I/O Queues: 127 00:27:46.656 NVMe Specification Version (VS): 1.3 00:27:46.656 NVMe Specification Version (Identify): 1.3 00:27:46.656 Maximum Queue Entries: 128 00:27:46.657 Contiguous Queues Required: Yes 00:27:46.657 Arbitration Mechanisms Supported 00:27:46.657 Weighted Round Robin: Not Supported 00:27:46.657 Vendor Specific: Not Supported 00:27:46.657 Reset Timeout: 15000 ms 00:27:46.657 Doorbell Stride: 4 bytes 00:27:46.657 NVM Subsystem Reset: Not Supported 00:27:46.657 Command Sets Supported 00:27:46.657 NVM Command Set: Supported 00:27:46.657 Boot Partition: Not Supported 00:27:46.657 Memory Page Size Minimum: 4096 bytes 00:27:46.657 Memory Page Size Maximum: 4096 bytes 00:27:46.657 Persistent Memory Region: Not Supported 00:27:46.657 Optional Asynchronous Events Supported 00:27:46.657 Namespace Attribute Notices: Supported 00:27:46.657 Firmware Activation Notices: Not Supported 00:27:46.657 ANA Change Notices: Not Supported 00:27:46.657 PLE Aggregate Log Change Notices: Not Supported 00:27:46.657 LBA Status Info Alert Notices: Not Supported 00:27:46.657 EGE Aggregate Log Change Notices: Not Supported 00:27:46.657 Normal NVM Subsystem Shutdown event: Not Supported 00:27:46.657 Zone Descriptor Change Notices: Not Supported 00:27:46.657 Discovery Log Change Notices: Not Supported 00:27:46.657 Controller Attributes 00:27:46.657 128-bit Host Identifier: Supported 00:27:46.657 Non-Operational Permissive Mode: Not Supported 00:27:46.657 NVM Sets: Not Supported 00:27:46.657 Read Recovery Levels: Not Supported 00:27:46.657 Endurance Groups: Not Supported 00:27:46.657 Predictable Latency Mode: Not Supported 00:27:46.657 Traffic Based Keep ALive: Not Supported 00:27:46.657 Namespace Granularity: Not Supported 00:27:46.657 SQ Associations: Not Supported 00:27:46.657 UUID List: Not Supported 00:27:46.657 Multi-Domain Subsystem: Not Supported 00:27:46.657 Fixed Capacity Management: Not Supported 00:27:46.657 Variable Capacity Management: Not Supported 00:27:46.657 Delete Endurance Group: Not Supported 00:27:46.657 Delete NVM Set: Not Supported 00:27:46.657 Extended LBA Formats Supported: Not Supported 00:27:46.657 Flexible Data Placement Supported: Not Supported 00:27:46.657 00:27:46.657 Controller Memory Buffer Support 00:27:46.657 ================================ 00:27:46.657 Supported: No 00:27:46.657 00:27:46.657 Persistent Memory Region Support 00:27:46.657 ================================ 00:27:46.657 Supported: No 00:27:46.657 00:27:46.657 Admin Command Set Attributes 00:27:46.657 ============================ 00:27:46.657 Security Send/Receive: Not Supported 00:27:46.657 Format NVM: Not Supported 00:27:46.657 Firmware Activate/Download: Not Supported 00:27:46.657 Namespace Management: Not Supported 00:27:46.657 Device Self-Test: Not Supported 00:27:46.657 Directives: Not Supported 00:27:46.657 NVMe-MI: Not Supported 00:27:46.657 Virtualization Management: Not Supported 00:27:46.657 Doorbell Buffer Config: Not Supported 00:27:46.657 Get LBA Status Capability: Not Supported 00:27:46.657 Command & Feature Lockdown Capability: Not Supported 00:27:46.657 Abort Command Limit: 4 
00:27:46.657 Async Event Request Limit: 4 00:27:46.657 Number of Firmware Slots: N/A 00:27:46.657 Firmware Slot 1 Read-Only: N/A 00:27:46.657 Firmware Activation Without Reset: N/A 00:27:46.657 Multiple Update Detection Support: N/A 00:27:46.657 Firmware Update Granularity: No Information Provided 00:27:46.657 Per-Namespace SMART Log: No 00:27:46.657 Asymmetric Namespace Access Log Page: Not Supported 00:27:46.657 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:46.657 Command Effects Log Page: Supported 00:27:46.657 Get Log Page Extended Data: Supported 00:27:46.657 Telemetry Log Pages: Not Supported 00:27:46.657 Persistent Event Log Pages: Not Supported 00:27:46.657 Supported Log Pages Log Page: May Support 00:27:46.657 Commands Supported & Effects Log Page: Not Supported 00:27:46.657 Feature Identifiers & Effects Log Page:May Support 00:27:46.657 NVMe-MI Commands & Effects Log Page: May Support 00:27:46.657 Data Area 4 for Telemetry Log: Not Supported 00:27:46.657 Error Log Page Entries Supported: 128 00:27:46.657 Keep Alive: Supported 00:27:46.657 Keep Alive Granularity: 10000 ms 00:27:46.657 00:27:46.657 NVM Command Set Attributes 00:27:46.657 ========================== 00:27:46.657 Submission Queue Entry Size 00:27:46.657 Max: 64 00:27:46.657 Min: 64 00:27:46.657 Completion Queue Entry Size 00:27:46.657 Max: 16 00:27:46.657 Min: 16 00:27:46.657 Number of Namespaces: 32 00:27:46.657 Compare Command: Supported 00:27:46.657 Write Uncorrectable Command: Not Supported 00:27:46.657 Dataset Management Command: Supported 00:27:46.657 Write Zeroes Command: Supported 00:27:46.657 Set Features Save Field: Not Supported 00:27:46.657 Reservations: Supported 00:27:46.657 Timestamp: Not Supported 00:27:46.657 Copy: Supported 00:27:46.657 Volatile Write Cache: Present 00:27:46.657 Atomic Write Unit (Normal): 1 00:27:46.657 Atomic Write Unit (PFail): 1 00:27:46.657 Atomic Compare & Write Unit: 1 00:27:46.657 Fused Compare & Write: Supported 00:27:46.657 Scatter-Gather List 00:27:46.657 SGL Command Set: Supported 00:27:46.657 SGL Keyed: Supported 00:27:46.657 SGL Bit Bucket Descriptor: Not Supported 00:27:46.657 SGL Metadata Pointer: Not Supported 00:27:46.657 Oversized SGL: Not Supported 00:27:46.657 SGL Metadata Address: Not Supported 00:27:46.657 SGL Offset: Supported 00:27:46.657 Transport SGL Data Block: Not Supported 00:27:46.657 Replay Protected Memory Block: Not Supported 00:27:46.657 00:27:46.657 Firmware Slot Information 00:27:46.657 ========================= 00:27:46.657 Active slot: 1 00:27:46.657 Slot 1 Firmware Revision: 24.09 00:27:46.657 00:27:46.657 00:27:46.657 Commands Supported and Effects 00:27:46.657 ============================== 00:27:46.657 Admin Commands 00:27:46.657 -------------- 00:27:46.657 Get Log Page (02h): Supported 00:27:46.657 Identify (06h): Supported 00:27:46.657 Abort (08h): Supported 00:27:46.657 Set Features (09h): Supported 00:27:46.657 Get Features (0Ah): Supported 00:27:46.657 Asynchronous Event Request (0Ch): Supported 00:27:46.657 Keep Alive (18h): Supported 00:27:46.657 I/O Commands 00:27:46.657 ------------ 00:27:46.657 Flush (00h): Supported LBA-Change 00:27:46.657 Write (01h): Supported LBA-Change 00:27:46.657 Read (02h): Supported 00:27:46.657 Compare (05h): Supported 00:27:46.657 Write Zeroes (08h): Supported LBA-Change 00:27:46.657 Dataset Management (09h): Supported LBA-Change 00:27:46.657 Copy (19h): Supported LBA-Change 00:27:46.657 00:27:46.657 Error Log 00:27:46.657 ========= 00:27:46.657 00:27:46.657 Arbitration 00:27:46.657 =========== 
00:27:46.657 Arbitration Burst: 1 00:27:46.657 00:27:46.657 Power Management 00:27:46.657 ================ 00:27:46.657 Number of Power States: 1 00:27:46.657 Current Power State: Power State #0 00:27:46.657 Power State #0: 00:27:46.657 Max Power: 0.00 W 00:27:46.657 Non-Operational State: Operational 00:27:46.657 Entry Latency: Not Reported 00:27:46.657 Exit Latency: Not Reported 00:27:46.657 Relative Read Throughput: 0 00:27:46.657 Relative Read Latency: 0 00:27:46.657 Relative Write Throughput: 0 00:27:46.657 Relative Write Latency: 0 00:27:46.657 Idle Power: Not Reported 00:27:46.657 Active Power: Not Reported 00:27:46.657 Non-Operational Permissive Mode: Not Supported 00:27:46.657 00:27:46.657 Health Information 00:27:46.657 ================== 00:27:46.657 Critical Warnings: 00:27:46.657 Available Spare Space: OK 00:27:46.657 Temperature: OK 00:27:46.657 Device Reliability: OK 00:27:46.657 Read Only: No 00:27:46.657 Volatile Memory Backup: OK 00:27:46.657 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:46.657 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:46.657 Available Spare: 0% 00:27:46.657 Available Spare Threshold: 0% 00:27:46.657 Life Percentage Used:[2024-07-13 15:39:17.168784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.657 [2024-07-13 15:39:17.168797] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x5a0630) 00:27:46.657 [2024-07-13 15:39:17.168807] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.657 [2024-07-13 15:39:17.168829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5efa00, cid 7, qid 0 00:27:46.657 [2024-07-13 15:39:17.169021] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.657 [2024-07-13 15:39:17.169036] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.657 [2024-07-13 15:39:17.169043] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.657 [2024-07-13 15:39:17.169050] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5efa00) on tqpair=0x5a0630 00:27:46.657 [2024-07-13 15:39:17.169096] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:46.657 [2024-07-13 15:39:17.169115] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5eef80) on tqpair=0x5a0630 00:27:46.657 [2024-07-13 15:39:17.169125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.657 [2024-07-13 15:39:17.169134] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ef100) on tqpair=0x5a0630 00:27:46.657 [2024-07-13 15:39:17.169142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.657 [2024-07-13 15:39:17.169165] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ef280) on tqpair=0x5a0630 00:27:46.657 [2024-07-13 15:39:17.169173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:46.657 [2024-07-13 15:39:17.169181] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ef400) on tqpair=0x5a0630 00:27:46.658 [2024-07-13 15:39:17.169188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:46.658 [2024-07-13 15:39:17.169200] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.658 [2024-07-13 15:39:17.169208] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.658 [2024-07-13 15:39:17.169214] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a0630) 00:27:46.658 [2024-07-13 15:39:17.169224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.658 [2024-07-13 15:39:17.169246] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ef400, cid 3, qid 0 00:27:46.658 [2024-07-13 15:39:17.169413] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.658 [2024-07-13 15:39:17.169426] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.658 [2024-07-13 15:39:17.169433] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.658 [2024-07-13 15:39:17.169440] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ef400) on tqpair=0x5a0630 00:27:46.658 [2024-07-13 15:39:17.169451] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.658 [2024-07-13 15:39:17.169458] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.658 [2024-07-13 15:39:17.169465] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a0630) 00:27:46.658 [2024-07-13 15:39:17.169475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.658 [2024-07-13 15:39:17.169501] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ef400, cid 3, qid 0 00:27:46.658 [2024-07-13 15:39:17.169641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.658 [2024-07-13 15:39:17.169656] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.658 [2024-07-13 15:39:17.169662] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.658 [2024-07-13 15:39:17.169669] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ef400) on tqpair=0x5a0630 00:27:46.658 [2024-07-13 15:39:17.169677] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:46.658 [2024-07-13 15:39:17.169685] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:46.658 [2024-07-13 15:39:17.169701] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.658 [2024-07-13 15:39:17.169710] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.658 [2024-07-13 15:39:17.169721] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a0630) 00:27:46.658 [2024-07-13 15:39:17.169732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.658 [2024-07-13 15:39:17.169753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ef400, cid 3, qid 0 00:27:46.658 [2024-07-13 15:39:17.173876] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.658 [2024-07-13 15:39:17.173893] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.658 [2024-07-13 15:39:17.173900] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.658 [2024-07-13 
15:39:17.173906] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ef400) on tqpair=0x5a0630 00:27:46.658 [2024-07-13 15:39:17.173924] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:46.658 [2024-07-13 15:39:17.173949] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:46.658 [2024-07-13 15:39:17.173956] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5a0630) 00:27:46.658 [2024-07-13 15:39:17.173967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.658 [2024-07-13 15:39:17.173990] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5ef400, cid 3, qid 0 00:27:46.658 [2024-07-13 15:39:17.174155] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:46.658 [2024-07-13 15:39:17.174168] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:46.658 [2024-07-13 15:39:17.174175] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:46.658 [2024-07-13 15:39:17.174181] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5ef400) on tqpair=0x5a0630 00:27:46.658 [2024-07-13 15:39:17.174194] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:27:46.658 0% 00:27:46.658 Data Units Read: 0 00:27:46.658 Data Units Written: 0 00:27:46.658 Host Read Commands: 0 00:27:46.658 Host Write Commands: 0 00:27:46.658 Controller Busy Time: 0 minutes 00:27:46.658 Power Cycles: 0 00:27:46.658 Power On Hours: 0 hours 00:27:46.658 Unsafe Shutdowns: 0 00:27:46.658 Unrecoverable Media Errors: 0 00:27:46.658 Lifetime Error Log Entries: 0 00:27:46.658 Warning Temperature Time: 0 minutes 00:27:46.658 Critical Temperature Time: 0 minutes 00:27:46.658 00:27:46.658 Number of Queues 00:27:46.658 ================ 00:27:46.658 Number of I/O Submission Queues: 127 00:27:46.658 Number of I/O Completion Queues: 127 00:27:46.658 00:27:46.658 Active Namespaces 00:27:46.658 ================= 00:27:46.658 Namespace ID:1 00:27:46.658 Error Recovery Timeout: Unlimited 00:27:46.658 Command Set Identifier: NVM (00h) 00:27:46.658 Deallocate: Supported 00:27:46.658 Deallocated/Unwritten Error: Not Supported 00:27:46.658 Deallocated Read Value: Unknown 00:27:46.658 Deallocate in Write Zeroes: Not Supported 00:27:46.658 Deallocated Guard Field: 0xFFFF 00:27:46.658 Flush: Supported 00:27:46.658 Reservation: Supported 00:27:46.658 Namespace Sharing Capabilities: Multiple Controllers 00:27:46.659 Size (in LBAs): 131072 (0GiB) 00:27:46.659 Capacity (in LBAs): 131072 (0GiB) 00:27:46.659 Utilization (in LBAs): 131072 (0GiB) 00:27:46.659 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:46.659 EUI64: ABCDEF0123456789 00:27:46.659 UUID: 11c88329-4266-4884-b422-983daf94bdb1 00:27:46.659 Thin Provisioning: Not Supported 00:27:46.659 Per-NS Atomic Units: Yes 00:27:46.659 Atomic Boundary Size (Normal): 0 00:27:46.659 Atomic Boundary Size (PFail): 0 00:27:46.659 Atomic Boundary Offset: 0 00:27:46.659 Maximum Single Source Range Length: 65535 00:27:46.659 Maximum Copy Length: 65535 00:27:46.659 Maximum Source Range Count: 1 00:27:46.659 NGUID/EUI64 Never Reused: No 00:27:46.659 Namespace Write Protected: No 00:27:46.659 Number of LBA Formats: 1 00:27:46.659 Current LBA Format: LBA Format #00 00:27:46.659 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:46.659 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- 
host/identify.sh@51 -- # sync 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:46.659 rmmod nvme_tcp 00:27:46.659 rmmod nvme_fabrics 00:27:46.659 rmmod nvme_keyring 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1199800 ']' 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1199800 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1199800 ']' 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1199800 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1199800 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1199800' 00:27:46.659 killing process with pid 1199800 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1199800 00:27:46.659 15:39:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1199800 00:27:46.920 15:39:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:46.920 15:39:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:46.920 15:39:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:46.920 15:39:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:46.920 15:39:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:46.920 15:39:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.920 15:39:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:46.920 15:39:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:27:48.823 15:39:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:48.823 00:27:48.823 real 0m5.256s 00:27:48.823 user 0m4.014s 00:27:48.823 sys 0m1.831s 00:27:48.823 15:39:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:48.823 15:39:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:48.823 ************************************ 00:27:48.823 END TEST nvmf_identify 00:27:48.823 ************************************ 00:27:49.082 15:39:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:49.082 15:39:19 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:49.082 15:39:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:49.082 15:39:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:49.082 15:39:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:49.082 ************************************ 00:27:49.082 START TEST nvmf_perf 00:27:49.082 ************************************ 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:49.082 * Looking for test storage... 00:27:49.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- 
host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:49.082 15:39:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:50.984 15:39:21 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:50.984 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:50.985 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:50.985 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:50.985 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for 
pci in "${pci_devs[@]}" 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:50.985 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:50.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:50.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:27:50.985 00:27:50.985 --- 10.0.0.2 ping statistics --- 00:27:50.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.985 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:50.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:50.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:27:50.985 00:27:50.985 --- 10.0.0.1 ping statistics --- 00:27:50.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.985 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:50.985 15:39:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:51.245 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1201787 00:27:51.245 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:51.245 15:39:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1201787 00:27:51.245 15:39:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1201787 ']' 00:27:51.245 15:39:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.245 15:39:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:51.245 15:39:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.245 15:39:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:51.245 15:39:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:51.245 [2024-07-13 15:39:21.797993] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
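For orientation, the nvmftestinit/nvmf_tcp_init sequence traced above splits the two E810 ports between a target network namespace and the host-side initiator; a minimal sketch of the equivalent manual setup, using the interface names and addresses shown in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # connectivity check, mirrored by the ping output above

The nvmf_tgt application is launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, path shortened), so every fabric run below that targets 10.0.0.2:4420 is served through cvl_0_0.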
00:27:51.245 [2024-07-13 15:39:21.798072] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.245 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.245 [2024-07-13 15:39:21.836851] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:51.245 [2024-07-13 15:39:21.863448] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:51.245 [2024-07-13 15:39:21.953180] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.245 [2024-07-13 15:39:21.953235] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:51.245 [2024-07-13 15:39:21.953256] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.245 [2024-07-13 15:39:21.953273] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.245 [2024-07-13 15:39:21.953289] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:51.245 [2024-07-13 15:39:21.953346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.245 [2024-07-13 15:39:21.953405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:51.245 [2024-07-13 15:39:21.953537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:51.245 [2024-07-13 15:39:21.953546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.504 15:39:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:51.504 15:39:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:27:51.504 15:39:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:51.504 15:39:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:51.504 15:39:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:51.504 15:39:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:51.504 15:39:22 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:51.504 15:39:22 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:54.788 15:39:25 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:54.788 15:39:25 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:54.788 15:39:25 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:27:54.788 15:39:25 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:55.046 15:39:25 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:55.046 15:39:25 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:27:55.046 15:39:25 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:55.046 15:39:25 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:55.046 15:39:25 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:55.304 [2024-07-13 15:39:25.937586] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.304 15:39:25 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:55.562 15:39:26 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:55.562 15:39:26 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:55.820 15:39:26 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:55.820 15:39:26 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:56.079 15:39:26 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:56.336 [2024-07-13 15:39:26.937292] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:56.336 15:39:26 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:56.594 15:39:27 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:27:56.594 15:39:27 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:27:56.594 15:39:27 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:56.594 15:39:27 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:27:57.975 Initializing NVMe Controllers 00:27:57.975 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:27:57.975 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:27:57.975 Initialization complete. Launching workers. 00:27:57.975 ======================================================== 00:27:57.975 Latency(us) 00:27:57.975 Device Information : IOPS MiB/s Average min max 00:27:57.975 PCIE (0000:88:00.0) NSID 1 from core 0: 85167.46 332.69 375.46 38.32 6255.40 00:27:57.975 ======================================================== 00:27:57.975 Total : 85167.46 332.69 375.46 38.32 6255.40 00:27:57.975 00:27:57.975 15:39:28 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:57.975 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.394 Initializing NVMe Controllers 00:27:59.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:59.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:59.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:59.394 Initialization complete. Launching workers. 
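The target configuration that the fabric runs below exercise was assembled with the RPCs traced just above; condensed, with the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path shortened to rpc.py, the sequence is roughly:

  rpc.py bdev_malloc_create 64 512                      # 64 MB malloc bdev, 512-byte blocks
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The first spdk_nvme_perf invocation above (-r 'trtype:PCIe traddr:0000:88:00.0') is a local baseline against the raw NVMe device at 0000:88:00.0; the remaining runs connect over NVMe/TCP to 10.0.0.2:4420 and exercise both namespaces, Malloc0 and Nvme0n1, added in that order.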
00:27:59.394 ======================================================== 00:27:59.394 Latency(us) 00:27:59.394 Device Information : IOPS MiB/s Average min max 00:27:59.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 65.00 0.25 15386.54 216.77 45788.74 00:27:59.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15219.74 7951.02 47886.79 00:27:59.394 ======================================================== 00:27:59.394 Total : 131.00 0.51 15302.50 216.77 47886.79 00:27:59.394 00:27:59.394 15:39:29 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:59.394 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.328 Initializing NVMe Controllers 00:28:00.328 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:00.328 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:00.328 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:00.328 Initialization complete. Launching workers. 00:28:00.328 ======================================================== 00:28:00.328 Latency(us) 00:28:00.328 Device Information : IOPS MiB/s Average min max 00:28:00.328 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8477.00 33.11 3775.30 626.59 7585.47 00:28:00.328 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3829.00 14.96 8400.67 6863.19 16600.30 00:28:00.328 ======================================================== 00:28:00.328 Total : 12306.00 48.07 5214.48 626.59 16600.30 00:28:00.328 00:28:00.586 15:39:31 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:00.586 15:39:31 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:00.586 15:39:31 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:00.586 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.125 Initializing NVMe Controllers 00:28:03.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:03.125 Controller IO queue size 128, less than required. 00:28:03.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:03.125 Controller IO queue size 128, less than required. 00:28:03.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:03.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:03.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:03.125 Initialization complete. Launching workers. 
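Reading the Latency(us) tables: Average, min and max are per-IO latencies in microseconds, and MiB/s is just IOPS times the IO size, so for the -q 32 -o 4096 run above 8477.00 IOPS x 4096 bytes works out to 33.11 MiB/s, and for the -q 1 run 65.00 IOPS x 4096 bytes gives 0.25 MiB/s, matching the printed columns.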
00:28:03.125 ======================================================== 00:28:03.125 Latency(us) 00:28:03.125 Device Information : IOPS MiB/s Average min max 00:28:03.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1048.91 262.23 126227.38 78316.45 191498.45 00:28:03.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 569.45 142.36 229830.75 119545.37 357435.86 00:28:03.125 ======================================================== 00:28:03.125 Total : 1618.36 404.59 162682.20 78316.45 357435.86 00:28:03.125 00:28:03.125 15:39:33 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:03.125 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.125 No valid NVMe controllers or AIO or URING devices found 00:28:03.125 Initializing NVMe Controllers 00:28:03.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:03.125 Controller IO queue size 128, less than required. 00:28:03.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:03.125 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:03.125 Controller IO queue size 128, less than required. 00:28:03.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:03.125 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:03.125 WARNING: Some requested NVMe devices were skipped 00:28:03.125 15:39:33 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:03.125 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.409 Initializing NVMe Controllers 00:28:06.409 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:06.409 Controller IO queue size 128, less than required. 00:28:06.409 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:06.409 Controller IO queue size 128, less than required. 00:28:06.409 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:06.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:06.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:06.409 Initialization complete. Launching workers. 
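The -q 128 -o 36964 -O 4096 run above picks an IO size that is not a multiple of the namespaces' 512-byte sector size (36964 = 72 x 512 + 100), which is why both namespaces are removed from the test and perf prints 'No valid NVMe controllers or AIO or URING devices found'; the warnings follow from that parameter choice rather than from a device problem, and the suite continues with the --transport-stat run whose TCP transport statistics follow below.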
00:28:06.409 00:28:06.409 ==================== 00:28:06.409 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:06.409 TCP transport: 00:28:06.409 polls: 27853 00:28:06.409 idle_polls: 11080 00:28:06.409 sock_completions: 16773 00:28:06.409 nvme_completions: 4067 00:28:06.409 submitted_requests: 6112 00:28:06.409 queued_requests: 1 00:28:06.409 00:28:06.409 ==================== 00:28:06.409 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:06.409 TCP transport: 00:28:06.409 polls: 23921 00:28:06.409 idle_polls: 7978 00:28:06.409 sock_completions: 15943 00:28:06.409 nvme_completions: 4649 00:28:06.409 submitted_requests: 6990 00:28:06.409 queued_requests: 1 00:28:06.409 ======================================================== 00:28:06.409 Latency(us) 00:28:06.409 Device Information : IOPS MiB/s Average min max 00:28:06.409 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1016.45 254.11 131168.92 77151.38 192477.17 00:28:06.410 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1161.95 290.49 111801.52 48631.65 178232.62 00:28:06.410 ======================================================== 00:28:06.410 Total : 2178.40 544.60 120838.45 48631.65 192477.17 00:28:06.410 00:28:06.410 15:39:36 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:06.410 15:39:36 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:06.410 15:39:36 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:06.410 15:39:36 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:28:06.410 15:39:36 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:09.695 15:39:39 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=406b40fd-742b-49ec-8b85-f14d8acdffcd 00:28:09.695 15:39:39 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 406b40fd-742b-49ec-8b85-f14d8acdffcd 00:28:09.695 15:39:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=406b40fd-742b-49ec-8b85-f14d8acdffcd 00:28:09.695 15:39:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:09.695 15:39:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:09.695 15:39:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:09.695 15:39:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:09.695 15:39:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:09.695 { 00:28:09.695 "uuid": "406b40fd-742b-49ec-8b85-f14d8acdffcd", 00:28:09.695 "name": "lvs_0", 00:28:09.695 "base_bdev": "Nvme0n1", 00:28:09.695 "total_data_clusters": 238234, 00:28:09.695 "free_clusters": 238234, 00:28:09.695 "block_size": 512, 00:28:09.695 "cluster_size": 4194304 00:28:09.695 } 00:28:09.695 ]' 00:28:09.695 15:39:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="406b40fd-742b-49ec-8b85-f14d8acdffcd") .free_clusters' 00:28:09.695 15:39:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:28:09.695 15:39:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="406b40fd-742b-49ec-8b85-f14d8acdffcd") .cluster_size' 00:28:09.695 15:39:40 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:09.695 15:39:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:28:09.695 15:39:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:28:09.695 952936 00:28:09.695 15:39:40 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:09.695 15:39:40 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:09.695 15:39:40 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 406b40fd-742b-49ec-8b85-f14d8acdffcd lbd_0 20480 00:28:10.263 15:39:40 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=82747b00-d9b6-4eae-9ba6-0b406b0ba7ee 00:28:10.263 15:39:40 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 82747b00-d9b6-4eae-9ba6-0b406b0ba7ee lvs_n_0 00:28:10.832 15:39:41 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=6472ccf9-01c3-4693-b9ba-dc31159f9797 00:28:10.832 15:39:41 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 6472ccf9-01c3-4693-b9ba-dc31159f9797 00:28:10.832 15:39:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=6472ccf9-01c3-4693-b9ba-dc31159f9797 00:28:10.832 15:39:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:10.832 15:39:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:10.832 15:39:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:10.832 15:39:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:11.090 15:39:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:11.090 { 00:28:11.090 "uuid": "406b40fd-742b-49ec-8b85-f14d8acdffcd", 00:28:11.090 "name": "lvs_0", 00:28:11.090 "base_bdev": "Nvme0n1", 00:28:11.090 "total_data_clusters": 238234, 00:28:11.090 "free_clusters": 233114, 00:28:11.090 "block_size": 512, 00:28:11.090 "cluster_size": 4194304 00:28:11.090 }, 00:28:11.090 { 00:28:11.090 "uuid": "6472ccf9-01c3-4693-b9ba-dc31159f9797", 00:28:11.090 "name": "lvs_n_0", 00:28:11.090 "base_bdev": "82747b00-d9b6-4eae-9ba6-0b406b0ba7ee", 00:28:11.090 "total_data_clusters": 5114, 00:28:11.090 "free_clusters": 5114, 00:28:11.090 "block_size": 512, 00:28:11.090 "cluster_size": 4194304 00:28:11.090 } 00:28:11.090 ]' 00:28:11.090 15:39:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="6472ccf9-01c3-4693-b9ba-dc31159f9797") .free_clusters' 00:28:11.090 15:39:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:28:11.090 15:39:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="6472ccf9-01c3-4693-b9ba-dc31159f9797") .cluster_size' 00:28:11.090 15:39:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:11.090 15:39:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:28:11.090 15:39:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:28:11.090 20456 00:28:11.090 15:39:41 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:11.090 15:39:41 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6472ccf9-01c3-4693-b9ba-dc31159f9797 lbd_nest_0 20456 00:28:11.348 15:39:42 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=56c9cc87-b9d1-4766-81af-1ca148b4f73b 00:28:11.348 15:39:42 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:11.607 15:39:42 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:11.607 15:39:42 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 56c9cc87-b9d1-4766-81af-1ca148b4f73b 00:28:11.865 15:39:42 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:12.125 15:39:42 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:12.125 15:39:42 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:12.125 15:39:42 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:12.125 15:39:42 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:12.125 15:39:42 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:12.125 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.339 Initializing NVMe Controllers 00:28:24.339 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:24.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:24.339 Initialization complete. Launching workers. 00:28:24.339 ======================================================== 00:28:24.339 Latency(us) 00:28:24.339 Device Information : IOPS MiB/s Average min max 00:28:24.339 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 48.80 0.02 20527.43 222.33 46885.15 00:28:24.339 ======================================================== 00:28:24.339 Total : 48.80 0.02 20527.43 222.33 46885.15 00:28:24.339 00:28:24.339 15:39:53 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:24.339 15:39:53 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:24.339 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.356 Initializing NVMe Controllers 00:28:34.356 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:34.356 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:34.356 Initialization complete. Launching workers. 
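Two details in the lvol setup above are easier to follow with the numbers written out. get_lvs_free_mb converts free clusters to MiB using the 4194304-byte (4 MiB) cluster size, so lvs_0 reports 238234 x 4 = 952936 MiB free, which perf.sh caps to 20480 MiB for lbd_0; the nested store lvs_n_0 reports 5114 x 4 = 20456 MiB, and that becomes the size of lbd_nest_0. The six perf runs in this part of the test (including the -q 1 -o 512 one just above) iterate over the queue depths and IO sizes set in perf.sh; a condensed sketch of that loop, with the full spdk_nvme_perf path shortened:

  qd_depth=("1" "32" "128")
  io_size=("512" "131072")
  for qd in "${qd_depth[@]}"; do
    for o in "${io_size[@]}"; do
      spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    done
  done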
00:28:34.356 ======================================================== 00:28:34.356 Latency(us) 00:28:34.356 Device Information : IOPS MiB/s Average min max 00:28:34.356 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.80 10.35 12093.00 3984.76 50889.20 00:28:34.356 ======================================================== 00:28:34.356 Total : 82.80 10.35 12093.00 3984.76 50889.20 00:28:34.356 00:28:34.356 15:40:03 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:34.356 15:40:03 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:34.356 15:40:03 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:34.356 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.338 Initializing NVMe Controllers 00:28:44.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:44.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:44.338 Initialization complete. Launching workers. 00:28:44.338 ======================================================== 00:28:44.338 Latency(us) 00:28:44.338 Device Information : IOPS MiB/s Average min max 00:28:44.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7116.17 3.47 4506.03 309.28 47822.08 00:28:44.338 ======================================================== 00:28:44.338 Total : 7116.17 3.47 4506.03 309.28 47822.08 00:28:44.338 00:28:44.338 15:40:13 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:44.338 15:40:13 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:44.338 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.313 Initializing NVMe Controllers 00:28:54.313 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:54.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:54.313 Initialization complete. Launching workers. 00:28:54.313 ======================================================== 00:28:54.313 Latency(us) 00:28:54.313 Device Information : IOPS MiB/s Average min max 00:28:54.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2025.90 253.24 15816.98 992.35 33980.15 00:28:54.313 ======================================================== 00:28:54.313 Total : 2025.90 253.24 15816.98 992.35 33980.15 00:28:54.313 00:28:54.313 15:40:24 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:54.313 15:40:24 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:54.313 15:40:24 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:54.313 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.293 Initializing NVMe Controllers 00:29:04.293 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:04.293 Controller IO queue size 128, less than required. 00:29:04.293 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:04.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:04.293 Initialization complete. Launching workers. 00:29:04.293 ======================================================== 00:29:04.293 Latency(us) 00:29:04.293 Device Information : IOPS MiB/s Average min max 00:29:04.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11805.50 5.76 10847.37 1885.65 27687.13 00:29:04.293 ======================================================== 00:29:04.293 Total : 11805.50 5.76 10847.37 1885.65 27687.13 00:29:04.293 00:29:04.293 15:40:34 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:04.293 15:40:34 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:04.293 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.522 Initializing NVMe Controllers 00:29:16.522 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:16.522 Controller IO queue size 128, less than required. 00:29:16.522 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:16.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:16.522 Initialization complete. Launching workers. 00:29:16.522 ======================================================== 00:29:16.522 Latency(us) 00:29:16.522 Device Information : IOPS MiB/s Average min max 00:29:16.522 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1200.13 150.02 106893.31 31004.10 232335.97 00:29:16.522 ======================================================== 00:29:16.522 Total : 1200.13 150.02 106893.31 31004.10 232335.97 00:29:16.522 00:29:16.522 15:40:45 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:16.522 15:40:45 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 56c9cc87-b9d1-4766-81af-1ca148b4f73b 00:29:16.522 15:40:46 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:16.522 15:40:46 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 82747b00-d9b6-4eae-9ba6-0b406b0ba7ee 00:29:16.522 15:40:46 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:16.522 15:40:47 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:16.522 15:40:47 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:16.522 15:40:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:16.522 15:40:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:16.522 15:40:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:16.522 15:40:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:16.522 15:40:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:16.522 15:40:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:16.522 rmmod nvme_tcp 00:29:16.522 rmmod nvme_fabrics 00:29:16.522 rmmod nvme_keyring 00:29:16.522 15:40:47 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:16.522 15:40:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:16.522 15:40:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:16.522 15:40:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1201787 ']' 00:29:16.522 15:40:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1201787 00:29:16.522 15:40:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1201787 ']' 00:29:16.522 15:40:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1201787 00:29:16.522 15:40:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:29:16.522 15:40:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:16.522 15:40:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1201787 00:29:16.522 15:40:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:16.522 15:40:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:16.522 15:40:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1201787' 00:29:16.522 killing process with pid 1201787 00:29:16.522 15:40:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1201787 00:29:16.522 15:40:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1201787 00:29:18.448 15:40:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:18.448 15:40:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:18.448 15:40:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:18.448 15:40:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:18.448 15:40:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:18.448 15:40:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.448 15:40:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:18.448 15:40:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.358 15:40:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:20.358 00:29:20.358 real 1m31.124s 00:29:20.358 user 5m31.362s 00:29:20.358 sys 0m17.901s 00:29:20.358 15:40:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:20.358 15:40:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:20.358 ************************************ 00:29:20.358 END TEST nvmf_perf 00:29:20.358 ************************************ 00:29:20.358 15:40:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:20.358 15:40:50 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:20.358 15:40:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:20.358 15:40:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:20.358 15:40:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:20.358 ************************************ 00:29:20.358 START TEST nvmf_fio_host 00:29:20.358 ************************************ 00:29:20.358 15:40:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:20.359 * Looking for test 
storage... 00:29:20.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:20.359 15:40:50 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:22.279 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
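gather_supported_nvmf_pci_devs, traced above, builds per-family device-ID lists (e810, x722, mlx) and matches them against a cached PCI scan; both ports found on this host are Intel 0x8086:0x159b (E810). Purely as an illustration, not the helper itself, the same match can be reproduced with lspci:

  # List E810 functions (vendor 0x8086, device 0x159b) with full domain:bus:dev.fn;
  # on this host it reports 0000:0a:00.0 and 0000:0a:00.1.
  lspci -D -d 8086:159b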
00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:22.279 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:22.279 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:22.279 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
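For each matched function the helper then resolves the kernel net device from sysfs, which is what produces the "Found net devices under ..." lines above. A minimal sketch of that lookup, using the two addresses discovered here:

  # Each PCI function exposes its bound net device(s) under
  # /sys/bus/pci/devices/<bdf>/net/ -- cvl_0_0 and cvl_0_1 on this host.
  for pci in 0000:0a:00.0 0000:0a:00.1; do
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$netdev" ] && echo "Found net devices under $pci: ${netdev##*/}"
    done
  done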
00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:22.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:22.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:29:22.279 00:29:22.279 --- 10.0.0.2 ping statistics --- 00:29:22.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.279 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:29:22.279 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:22.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:22.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:29:22.279 00:29:22.279 --- 10.0.0.1 ping statistics --- 00:29:22.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.280 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:29:22.280 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:22.280 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:22.280 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:22.280 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:22.280 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:22.280 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:22.280 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:22.280 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:22.280 15:40:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:22.280 15:40:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:22.280 15:40:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:22.280 15:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:22.280 15:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.280 15:40:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1213769 00:29:22.280 15:40:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:22.280 15:40:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:22.280 15:40:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1213769 00:29:22.280 15:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1213769 ']' 00:29:22.280 15:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.280 15:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:22.280 15:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.280 15:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:22.280 15:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.280 [2024-07-13 15:40:52.994937] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:29:22.280 [2024-07-13 15:40:52.995013] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:22.280 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.280 [2024-07-13 15:40:53.034276] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
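nvmf_tcp_init, traced above, isolates the target-side port in its own network namespace and verifies connectivity in both directions before the target is started inside that namespace. Condensed into a plain script (same commands and addresses as in the trace; assumes root and the two cvl_0_* ports):

  # Move the target port into a namespace, address both ends, open TCP/4420,
  # and confirm reachability before starting the target.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
  # The NVMe-oF target then runs inside the namespace:
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF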
00:29:22.538 [2024-07-13 15:40:53.068044] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:22.538 [2024-07-13 15:40:53.164659] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:22.538 [2024-07-13 15:40:53.164717] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:22.538 [2024-07-13 15:40:53.164733] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:22.538 [2024-07-13 15:40:53.164747] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:22.538 [2024-07-13 15:40:53.164758] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:22.538 [2024-07-13 15:40:53.164820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.538 [2024-07-13 15:40:53.164908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:22.538 [2024-07-13 15:40:53.168886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:22.538 [2024-07-13 15:40:53.168897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.538 15:40:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:22.538 15:40:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:29:22.538 15:40:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:23.106 [2024-07-13 15:40:53.574579] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:23.106 15:40:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:23.106 15:40:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:23.106 15:40:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.106 15:40:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:23.106 Malloc1 00:29:23.364 15:40:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:23.621 15:40:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:23.878 15:40:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:24.135 [2024-07-13 15:40:54.675293] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.136 15:40:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:24.393 15:40:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:24.393 15:40:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:24.394 15:40:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:24.394 15:40:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:24.394 15:40:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:24.394 15:40:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:24.394 15:40:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:24.394 15:40:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:24.394 15:40:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:24.394 15:40:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:24.394 15:40:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:24.394 15:40:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:24.394 15:40:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:24.394 15:40:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:24.394 15:40:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:24.394 15:40:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:24.394 15:40:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:24.394 15:40:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:24.394 15:40:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:24.394 15:40:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:24.394 15:40:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:24.394 15:40:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:24.394 15:40:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:24.394 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:24.394 fio-3.35 00:29:24.394 Starting 1 thread 00:29:24.654 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.182 00:29:27.182 test: (groupid=0, jobs=1): err= 0: pid=1214225: Sat Jul 13 15:40:57 2024 00:29:27.182 read: IOPS=8913, BW=34.8MiB/s (36.5MB/s)(69.9MiB/2007msec) 00:29:27.182 slat (nsec): min=2000, max=150646, avg=2597.78, stdev=1772.77 00:29:27.183 clat (usec): min=3242, max=13947, avg=7910.42, stdev=580.81 00:29:27.183 lat (usec): min=3270, max=13949, avg=7913.01, stdev=580.68 00:29:27.183 clat percentiles (usec): 00:29:27.183 | 1.00th=[ 6587], 5.00th=[ 7046], 10.00th=[ 7242], 20.00th=[ 7439], 00:29:27.183 | 30.00th=[ 7635], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8029], 00:29:27.183 | 
70.00th=[ 8225], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8848], 00:29:27.183 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[11076], 99.95th=[12256], 00:29:27.183 | 99.99th=[13304] 00:29:27.183 bw ( KiB/s): min=34520, max=36184, per=99.96%, avg=35642.00, stdev=760.47, samples=4 00:29:27.183 iops : min= 8630, max= 9046, avg=8910.50, stdev=190.12, samples=4 00:29:27.183 write: IOPS=8927, BW=34.9MiB/s (36.6MB/s)(70.0MiB/2007msec); 0 zone resets 00:29:27.183 slat (usec): min=2, max=145, avg= 2.74, stdev= 1.55 00:29:27.183 clat (usec): min=1437, max=12253, avg=6342.78, stdev=518.08 00:29:27.183 lat (usec): min=1446, max=12255, avg=6345.52, stdev=518.04 00:29:27.183 clat percentiles (usec): 00:29:27.183 | 1.00th=[ 5211], 5.00th=[ 5538], 10.00th=[ 5735], 20.00th=[ 5932], 00:29:27.183 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6325], 60.00th=[ 6456], 00:29:27.183 | 70.00th=[ 6587], 80.00th=[ 6718], 90.00th=[ 6915], 95.00th=[ 7111], 00:29:27.183 | 99.00th=[ 7439], 99.50th=[ 7635], 99.90th=[10290], 99.95th=[10945], 00:29:27.183 | 99.99th=[12256] 00:29:27.183 bw ( KiB/s): min=35344, max=36096, per=100.00%, avg=35718.00, stdev=369.58, samples=4 00:29:27.183 iops : min= 8836, max= 9024, avg=8929.50, stdev=92.40, samples=4 00:29:27.183 lat (msec) : 2=0.01%, 4=0.08%, 10=99.76%, 20=0.15% 00:29:27.183 cpu : usr=55.28%, sys=38.33%, ctx=68, majf=0, minf=41 00:29:27.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:27.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:27.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:27.183 issued rwts: total=17890,17917,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:27.183 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:27.183 00:29:27.183 Run status group 0 (all jobs): 00:29:27.183 READ: bw=34.8MiB/s (36.5MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), io=69.9MiB (73.3MB), run=2007-2007msec 00:29:27.183 WRITE: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=70.0MiB (73.4MB), run=2007-2007msec 00:29:27.183 15:40:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:27.183 15:40:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:27.183 15:40:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:27.183 15:40:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:27.183 15:40:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:27.183 15:40:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:27.183 15:40:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:27.183 15:40:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:27.183 15:40:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:27.183 15:40:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:27.183 15:40:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:27.183 15:40:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:27.183 15:40:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:27.183 15:40:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:27.183 15:40:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:27.183 15:40:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:27.183 15:40:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:27.183 15:40:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:27.183 15:40:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:27.183 15:40:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:27.183 15:40:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:27.183 15:40:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:27.183 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:27.183 fio-3.35 00:29:27.183 Starting 1 thread 00:29:27.183 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.709 00:29:29.709 test: (groupid=0, jobs=1): err= 0: pid=1214558: Sat Jul 13 15:41:00 2024 00:29:29.709 read: IOPS=6541, BW=102MiB/s (107MB/s)(206MiB/2013msec) 00:29:29.709 slat (nsec): min=2974, max=94827, avg=3682.38, stdev=1777.44 00:29:29.709 clat (usec): min=1938, max=25073, avg=11183.59, stdev=2899.11 00:29:29.709 lat (usec): min=1941, max=25077, avg=11187.27, stdev=2899.15 00:29:29.709 clat percentiles (usec): 00:29:29.709 | 1.00th=[ 4948], 5.00th=[ 6652], 10.00th=[ 7570], 20.00th=[ 8848], 00:29:29.709 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[10945], 60.00th=[11731], 00:29:29.709 | 70.00th=[12518], 80.00th=[13304], 90.00th=[14746], 95.00th=[16057], 00:29:29.709 | 99.00th=[19792], 99.50th=[20841], 99.90th=[21890], 99.95th=[22676], 00:29:29.709 | 99.99th=[24511] 00:29:29.709 bw ( KiB/s): min=43040, max=63264, per=51.10%, avg=53480.00, stdev=8541.56, samples=4 00:29:29.709 iops : min= 2690, max= 3954, avg=3342.50, stdev=533.85, samples=4 00:29:29.709 write: IOPS=3723, BW=58.2MiB/s (61.0MB/s)(110MiB/1884msec); 0 zone resets 00:29:29.709 slat (usec): min=30, max=134, avg=33.90, stdev= 5.21 00:29:29.709 clat (usec): min=3275, max=31702, avg=14993.38, stdev=4050.08 00:29:29.709 lat (usec): min=3307, max=31734, avg=15027.28, stdev=4050.36 00:29:29.709 clat percentiles (usec): 00:29:29.709 | 1.00th=[ 7898], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10814], 00:29:29.709 | 30.00th=[12387], 40.00th=[13960], 50.00th=[15270], 60.00th=[16319], 00:29:29.709 | 70.00th=[17433], 80.00th=[18744], 90.00th=[20317], 95.00th=[21103], 00:29:29.709 | 99.00th=[24249], 99.50th=[26084], 99.90th=[30802], 99.95th=[31589], 00:29:29.709 | 99.99th=[31589] 00:29:29.709 bw ( KiB/s): min=45120, max=66560, per=93.17%, avg=55512.00, 
stdev=8911.92, samples=4 00:29:29.709 iops : min= 2820, max= 4160, avg=3469.50, stdev=557.00, samples=4 00:29:29.709 lat (msec) : 2=0.01%, 4=0.22%, 10=26.33%, 20=69.10%, 50=4.34% 00:29:29.709 cpu : usr=67.05%, sys=27.58%, ctx=51, majf=0, minf=59 00:29:29.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:29:29.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:29.709 issued rwts: total=13168,7016,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.709 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:29.709 00:29:29.709 Run status group 0 (all jobs): 00:29:29.709 READ: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=206MiB (216MB), run=2013-2013msec 00:29:29.709 WRITE: bw=58.2MiB/s (61.0MB/s), 58.2MiB/s-58.2MiB/s (61.0MB/s-61.0MB/s), io=110MiB (115MB), run=1884-1884msec 00:29:29.709 15:41:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:29.967 15:41:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:29.967 15:41:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:29.967 15:41:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:29.967 15:41:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:29.967 15:41:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:29:29.967 15:41:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:29.967 15:41:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:29.967 15:41:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:29.967 15:41:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:29.967 15:41:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:29:29.967 15:41:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:29:33.253 Nvme0n1 00:29:33.253 15:41:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:36.539 15:41:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=54f60945-d405-4336-8f70-705e6a7320fb 00:29:36.539 15:41:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 54f60945-d405-4336-8f70-705e6a7320fb 00:29:36.539 15:41:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=54f60945-d405-4336-8f70-705e6a7320fb 00:29:36.539 15:41:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:36.539 15:41:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:36.539 15:41:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:36.539 15:41:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:36.539 15:41:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 
00:29:36.539 { 00:29:36.539 "uuid": "54f60945-d405-4336-8f70-705e6a7320fb", 00:29:36.539 "name": "lvs_0", 00:29:36.539 "base_bdev": "Nvme0n1", 00:29:36.539 "total_data_clusters": 930, 00:29:36.539 "free_clusters": 930, 00:29:36.539 "block_size": 512, 00:29:36.539 "cluster_size": 1073741824 00:29:36.539 } 00:29:36.539 ]' 00:29:36.539 15:41:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="54f60945-d405-4336-8f70-705e6a7320fb") .free_clusters' 00:29:36.539 15:41:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:29:36.539 15:41:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="54f60945-d405-4336-8f70-705e6a7320fb") .cluster_size' 00:29:36.539 15:41:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:29:36.539 15:41:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:29:36.539 15:41:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:29:36.539 952320 00:29:36.539 15:41:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:29:36.539 da8fe512-5ff8-4633-b4f3-d3e4cf4bf7f0 00:29:36.539 15:41:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:36.796 15:41:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:37.060 15:41:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:37.366 15:41:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:37.366 15:41:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:37.366 15:41:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:37.366 15:41:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:37.366 15:41:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:37.366 15:41:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:37.366 15:41:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:37.366 15:41:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:37.366 15:41:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:37.366 15:41:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:37.366 15:41:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:37.366 15:41:08 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:37.366 15:41:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:37.366 15:41:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:37.366 15:41:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:37.366 15:41:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:37.366 15:41:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:37.366 15:41:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:37.366 15:41:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:37.366 15:41:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:37.366 15:41:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:37.366 15:41:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:37.624 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:37.624 fio-3.35 00:29:37.624 Starting 1 thread 00:29:37.624 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.155 00:29:40.155 test: (groupid=0, jobs=1): err= 0: pid=1215842: Sat Jul 13 15:41:10 2024 00:29:40.155 read: IOPS=6117, BW=23.9MiB/s (25.1MB/s)(48.0MiB/2008msec) 00:29:40.155 slat (usec): min=2, max=144, avg= 2.70, stdev= 1.94 00:29:40.155 clat (usec): min=971, max=171443, avg=11538.39, stdev=11575.21 00:29:40.155 lat (usec): min=973, max=171477, avg=11541.09, stdev=11575.44 00:29:40.155 clat percentiles (msec): 00:29:40.155 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:29:40.155 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:29:40.155 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:29:40.155 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:29:40.155 | 99.99th=[ 171] 00:29:40.155 bw ( KiB/s): min=17048, max=27040, per=99.85%, avg=24432.00, stdev=4926.58, samples=4 00:29:40.155 iops : min= 4262, max= 6760, avg=6108.00, stdev=1231.64, samples=4 00:29:40.155 write: IOPS=6096, BW=23.8MiB/s (25.0MB/s)(47.8MiB/2008msec); 0 zone resets 00:29:40.155 slat (usec): min=2, max=101, avg= 2.84, stdev= 1.49 00:29:40.155 clat (usec): min=286, max=169550, avg=9267.89, stdev=10874.00 00:29:40.155 lat (usec): min=289, max=169555, avg=9270.72, stdev=10874.21 00:29:40.155 clat percentiles (msec): 00:29:40.155 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:29:40.155 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:29:40.155 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:29:40.155 | 99.00th=[ 11], 99.50th=[ 16], 99.90th=[ 169], 99.95th=[ 169], 00:29:40.155 | 99.99th=[ 169] 00:29:40.155 bw ( KiB/s): min=18088, max=26624, per=99.91%, avg=24362.00, stdev=4184.53, samples=4 00:29:40.155 iops : min= 4522, max= 6656, avg=6090.50, stdev=1046.13, samples=4 00:29:40.155 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:29:40.155 lat (msec) : 2=0.03%, 4=0.12%, 10=58.58%, 20=40.73%, 250=0.52% 00:29:40.155 cpu : usr=56.10%, 
sys=39.66%, ctx=61, majf=0, minf=41 00:29:40.155 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:40.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.155 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:40.155 issued rwts: total=12283,12241,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:40.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:40.155 00:29:40.155 Run status group 0 (all jobs): 00:29:40.155 READ: bw=23.9MiB/s (25.1MB/s), 23.9MiB/s-23.9MiB/s (25.1MB/s-25.1MB/s), io=48.0MiB (50.3MB), run=2008-2008msec 00:29:40.155 WRITE: bw=23.8MiB/s (25.0MB/s), 23.8MiB/s-23.8MiB/s (25.0MB/s-25.0MB/s), io=47.8MiB (50.1MB), run=2008-2008msec 00:29:40.155 15:41:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:40.155 15:41:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:41.529 15:41:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=bf902f6f-e0de-432b-923d-8b9afe540e94 00:29:41.529 15:41:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb bf902f6f-e0de-432b-923d-8b9afe540e94 00:29:41.529 15:41:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=bf902f6f-e0de-432b-923d-8b9afe540e94 00:29:41.529 15:41:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:41.529 15:41:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:41.529 15:41:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:41.529 15:41:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:41.529 15:41:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:41.529 { 00:29:41.529 "uuid": "54f60945-d405-4336-8f70-705e6a7320fb", 00:29:41.529 "name": "lvs_0", 00:29:41.529 "base_bdev": "Nvme0n1", 00:29:41.529 "total_data_clusters": 930, 00:29:41.529 "free_clusters": 0, 00:29:41.529 "block_size": 512, 00:29:41.529 "cluster_size": 1073741824 00:29:41.529 }, 00:29:41.529 { 00:29:41.529 "uuid": "bf902f6f-e0de-432b-923d-8b9afe540e94", 00:29:41.529 "name": "lvs_n_0", 00:29:41.529 "base_bdev": "da8fe512-5ff8-4633-b4f3-d3e4cf4bf7f0", 00:29:41.529 "total_data_clusters": 237847, 00:29:41.529 "free_clusters": 237847, 00:29:41.529 "block_size": 512, 00:29:41.529 "cluster_size": 4194304 00:29:41.529 } 00:29:41.529 ]' 00:29:41.529 15:41:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="bf902f6f-e0de-432b-923d-8b9afe540e94") .free_clusters' 00:29:41.529 15:41:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:29:41.529 15:41:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="bf902f6f-e0de-432b-923d-8b9afe540e94") .cluster_size' 00:29:41.786 15:41:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:41.786 15:41:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:29:41.786 15:41:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:29:41.786 951388 00:29:41.786 15:41:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:29:42.352 81cd298c-32aa-4655-8f71-19a119f93eb4 00:29:42.352 15:41:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:42.610 15:41:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:42.868 15:41:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:43.126 15:41:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:43.126 15:41:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:43.126 15:41:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:43.126 15:41:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:43.126 15:41:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:43.126 15:41:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:43.126 15:41:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:43.126 15:41:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:43.126 15:41:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:43.126 15:41:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:43.126 15:41:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:43.126 15:41:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:43.126 15:41:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:43.126 15:41:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:43.126 15:41:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:43.126 15:41:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:43.126 15:41:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:43.126 15:41:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:43.126 15:41:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:43.126 15:41:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:43.126 15:41:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 
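The fio jobs in this test all follow the same pattern: the SPDK NVMe external ioengine is injected via LD_PRELOAD and the NVMe-oF/TCP namespace is addressed through the --filename string rather than a block device. For the example_config.fio runs, the full command (arguments copied from the trace) is:

  # Run fio's example_config.fio against the TCP-attached namespace using
  # SPDK's fio plugin; ioengine=spdk is set inside the job file.
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
    --bs=4096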
00:29:43.126 15:41:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:43.384 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:43.384 fio-3.35 00:29:43.384 Starting 1 thread 00:29:43.384 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.912 00:29:45.912 test: (groupid=0, jobs=1): err= 0: pid=1216576: Sat Jul 13 15:41:16 2024 00:29:45.912 read: IOPS=5879, BW=23.0MiB/s (24.1MB/s)(46.1MiB/2009msec) 00:29:45.912 slat (nsec): min=1885, max=158419, avg=2598.39, stdev=2261.86 00:29:45.912 clat (usec): min=4300, max=20212, avg=12051.86, stdev=1000.27 00:29:45.912 lat (usec): min=4311, max=20214, avg=12054.45, stdev=1000.14 00:29:45.912 clat percentiles (usec): 00:29:45.912 | 1.00th=[ 9765], 5.00th=[10421], 10.00th=[10814], 20.00th=[11338], 00:29:45.912 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:29:45.912 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13304], 95.00th=[13566], 00:29:45.912 | 99.00th=[14222], 99.50th=[14484], 99.90th=[17957], 99.95th=[19268], 00:29:45.912 | 99.99th=[20055] 00:29:45.912 bw ( KiB/s): min=22331, max=23952, per=99.86%, avg=23484.75, stdev=775.69, samples=4 00:29:45.912 iops : min= 5582, max= 5988, avg=5871.00, stdev=194.30, samples=4 00:29:45.912 write: IOPS=5874, BW=22.9MiB/s (24.1MB/s)(46.1MiB/2009msec); 0 zone resets 00:29:45.912 slat (usec): min=2, max=120, avg= 2.74, stdev= 1.83 00:29:45.912 clat (usec): min=2159, max=18158, avg=9606.02, stdev=911.80 00:29:45.912 lat (usec): min=2168, max=18160, avg=9608.75, stdev=911.75 00:29:45.912 clat percentiles (usec): 00:29:45.912 | 1.00th=[ 7570], 5.00th=[ 8225], 10.00th=[ 8586], 20.00th=[ 8979], 00:29:45.912 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9765], 00:29:45.912 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[10945], 00:29:45.912 | 99.00th=[11600], 99.50th=[11863], 99.90th=[17695], 99.95th=[17957], 00:29:45.912 | 99.99th=[18220] 00:29:45.912 bw ( KiB/s): min=23257, max=23616, per=99.85%, avg=23462.25, stdev=149.55, samples=4 00:29:45.912 iops : min= 5814, max= 5904, avg=5865.50, stdev=37.50, samples=4 00:29:45.912 lat (msec) : 4=0.04%, 10=35.47%, 20=64.48%, 50=0.01% 00:29:45.912 cpu : usr=56.23%, sys=39.29%, ctx=53, majf=0, minf=41 00:29:45.912 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:45.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:45.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:45.912 issued rwts: total=11811,11801,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:45.912 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:45.912 00:29:45.912 Run status group 0 (all jobs): 00:29:45.912 READ: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=46.1MiB (48.4MB), run=2009-2009msec 00:29:45.912 WRITE: bw=22.9MiB/s (24.1MB/s), 22.9MiB/s-22.9MiB/s (24.1MB/s-24.1MB/s), io=46.1MiB (48.3MB), run=2009-2009msec 00:29:45.912 15:41:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:45.912 15:41:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:29:45.912 15:41:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:50.123 15:41:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:50.123 15:41:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:29:53.413 15:41:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:53.413 15:41:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:55.317 15:41:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:55.317 15:41:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:55.317 15:41:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:29:55.317 15:41:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:55.318 rmmod nvme_tcp 00:29:55.318 rmmod nvme_fabrics 00:29:55.318 rmmod nvme_keyring 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1213769 ']' 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1213769 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1213769 ']' 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1213769 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1213769 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1213769' 00:29:55.318 killing process with pid 1213769 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1213769 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1213769 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:55.318 15:41:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.318 15:41:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:55.318 15:41:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.852 15:41:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:57.852 00:29:57.852 real 0m37.251s 00:29:57.852 user 2m21.878s 00:29:57.852 sys 0m7.504s 00:29:57.852 15:41:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:57.852 15:41:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.852 ************************************ 00:29:57.852 END TEST nvmf_fio_host 00:29:57.852 ************************************ 00:29:57.852 15:41:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:57.852 15:41:28 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:57.852 15:41:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:57.852 15:41:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:57.852 15:41:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:57.852 ************************************ 00:29:57.852 START TEST nvmf_failover 00:29:57.852 ************************************ 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:57.852 * Looking for test storage... 
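Stripped of the xtrace noise, the nvmf_fio_host run that just finished follows a simple shape: carve an lvol, expose it through an NVMe/TCP subsystem, point fio's SPDK NVMe plugin at the listener, then tear everything down. A condensed sketch of that sequence, using only commands visible in the trace above ($SPDK is shorthand for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout, not a variable the harness itself sets; the backgrounding/wait logic of the real scripts is omitted):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc() { $SPDK/scripts/rpc.py "$@"; }

  # export the nested lvol over NVMe/TCP (size and serial as seen in the trace)
  rpc bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420

  # drive it with fio through the LD_PRELOADed SPDK NVMe ioengine
  LD_PRELOAD=$SPDK/build/fio/spdk_nvme /usr/src/fio/fio \
      $SPDK/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

  # teardown, mirroring host/fio.sh@72-80 above
  rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
  rpc bdev_lvol_delete lvs_n_0/lbd_nest_0
  rpc bdev_lvol_delete_lvstore -l lvs_n_0
  rpc bdev_lvol_delete lvs_0/lbd_0
  rpc bdev_lvol_delete_lvstore -l lvs_0
  rpc bdev_nvme_detach_controller Nvme0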
00:29:57.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:29:57.852 15:41:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:59.781 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:59.781 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:59.781 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:59.781 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.781 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:59.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:59.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:29:59.781 00:29:59.781 --- 10.0.0.2 ping statistics --- 00:29:59.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.782 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:29:59.782 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:59.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:29:59.782 00:29:59.782 --- 10.0.0.1 ping statistics --- 00:29:59.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.782 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:29:59.782 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.782 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:29:59.782 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:59.782 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.782 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:59.782 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:59.782 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.782 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:59.782 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:59.782 15:41:30 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:59.782 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:59.782 15:41:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:59.782 15:41:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:59.782 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1219936 00:29:59.782 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:59.782 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1219936 00:29:59.782 15:41:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1219936 ']' 00:29:59.782 15:41:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.782 15:41:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:59.782 15:41:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.782 15:41:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:59.782 15:41:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:59.782 [2024-07-13 15:41:30.329640] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
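Before the failover target comes up, nvmftestinit/nvmf_tcp_init splits the two detected ice ports across a network namespace so the "initiator" and "target" sides talk over a real TCP link; the two pings above confirm both directions. Roughly, assuming cvl_0_0/cvl_0_1 are the port netdevs found earlier (paths again shortened to $SPDK, and the harness's error handling left out):

  # target-side namespace plumbing, as traced by nvmf/common.sh above
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # the nvmf target then runs inside that namespace (-m 0xE = reactors on cores 1-3)
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &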
00:29:59.782 [2024-07-13 15:41:30.329724] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.782 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.782 [2024-07-13 15:41:30.373808] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:59.782 [2024-07-13 15:41:30.404621] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:59.782 [2024-07-13 15:41:30.494621] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.782 [2024-07-13 15:41:30.494683] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.782 [2024-07-13 15:41:30.494700] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.782 [2024-07-13 15:41:30.494713] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:59.782 [2024-07-13 15:41:30.494726] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:59.782 [2024-07-13 15:41:30.494809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:59.782 [2024-07-13 15:41:30.494884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:59.782 [2024-07-13 15:41:30.494887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.040 15:41:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:00.040 15:41:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:00.040 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:00.040 15:41:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:00.040 15:41:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:00.040 15:41:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:00.040 15:41:30 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:00.296 [2024-07-13 15:41:30.878654] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.296 15:41:30 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:00.554 Malloc0 00:30:00.554 15:41:31 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:00.812 15:41:31 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:01.069 15:41:31 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:01.326 [2024-07-13 15:41:31.992698] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.326 15:41:32 nvmf_tcp.nvmf_failover -- 
host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:01.584 [2024-07-13 15:41:32.281517] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:01.584 15:41:32 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:01.842 [2024-07-13 15:41:32.550410] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:01.842 15:41:32 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1220225 00:30:01.842 15:41:32 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:01.842 15:41:32 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:01.842 15:41:32 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1220225 /var/tmp/bdevperf.sock 00:30:01.842 15:41:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1220225 ']' 00:30:01.842 15:41:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:01.842 15:41:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:01.842 15:41:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:01.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
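At this point cnode1 (backed by the 64 MiB Malloc0 created above) is listening on ports 4420, 4421 and 4422, and bdevperf has been launched in -z (wait-for-RPC) mode on /var/tmp/bdevperf.sock with a 128-deep, 4 KiB, 15 s verify workload. The failover exercise that follows attaches the controller through two portals and then removes listeners one at a time so I/O has to switch paths. A sketch of those steps as they appear in the trace below (same $SPDK shorthand; timing reduced to the sleeps the script itself uses):

  rpc()  { $SPDK/scripts/rpc.py "$@"; }                               # target RPC
  brpc() { $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }     # bdevperf RPC

  # two paths to the same subsystem, so NVMe0 can fail over between them
  brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # start the verify workload, then pull listeners out from under the active path
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  sleep 1
  rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 3
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  wait    # let the 15 s run finish (host/failover.sh@59 in the trace)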
00:30:01.842 15:41:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:01.842 15:41:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:02.100 15:41:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:02.100 15:41:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:02.357 15:41:32 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:02.614 NVMe0n1 00:30:02.614 15:41:33 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:02.872 00:30:02.872 15:41:33 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1220357 00:30:02.872 15:41:33 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:02.872 15:41:33 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:04.248 15:41:34 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:04.248 [2024-07-13 15:41:34.834885] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.834994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835024] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835074] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835085] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835109] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835121] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835132] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835199] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835233] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835244] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835267] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835279] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835290] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835325] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835337] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835348] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 [2024-07-13 15:41:34.835382] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c35b0 is same with the state(5) to be set 00:30:04.248 15:41:34 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:07.533 15:41:37 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 
-n nqn.2016-06.io.spdk:cnode1 00:30:07.533 00:30:07.533 15:41:38 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:07.792 15:41:38 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:11.072 15:41:41 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:11.072 [2024-07-13 15:41:41.693791] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:11.072 15:41:41 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:12.008 15:41:42 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:12.268 [2024-07-13 15:41:42.951891] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.951981] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.951997] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952010] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952033] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952045] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952080] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952092] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952104] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952115] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952166] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952178] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952202] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952219] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952266] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952278] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952290] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952325] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952337] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952348] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952360] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952419] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 
15:41:42.952431] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.268 [2024-07-13 15:41:42.952454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952470] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952482] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952506] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952564] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952599] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952623] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952634] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952646] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952658] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952684] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952695] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same 
with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952706] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952717] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952740] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952751] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952762] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952773] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952785] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952800] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952811] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952834] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 [2024-07-13 15:41:42.952845] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5050 is same with the state(5) to be set 00:30:12.269 15:41:42 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1220357 00:30:18.841 0 00:30:18.842 15:41:48 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1220225 00:30:18.842 15:41:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1220225 ']' 00:30:18.842 15:41:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1220225 00:30:18.842 15:41:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:30:18.842 15:41:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:18.842 15:41:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1220225 00:30:18.842 15:41:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:18.842 15:41:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:18.842 15:41:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1220225' 00:30:18.842 killing process with pid 1220225 00:30:18.842 15:41:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1220225 00:30:18.842 15:41:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1220225 00:30:18.842 15:41:49 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:18.842 [2024-07-13 15:41:32.615053] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:30:18.842 [2024-07-13 15:41:32.615134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1220225 ] 00:30:18.842 EAL: No free 2048 kB hugepages reported on node 1 00:30:18.842 [2024-07-13 15:41:32.647048] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:18.842 [2024-07-13 15:41:32.675741] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.842 [2024-07-13 15:41:32.762264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.842 Running I/O for 15 seconds... 00:30:18.842 [2024-07-13 15:41:34.836004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:112 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80440 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.842 [2024-07-13 15:41:34.836759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.842 [2024-07-13 15:41:34.836793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.842 [2024-07-13 15:41:34.836820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:18.842 [2024-07-13 15:41:34.836857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.842 [2024-07-13 15:41:34.836916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.842 [2024-07-13 15:41:34.836944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.842 [2024-07-13 15:41:34.836971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.836985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.842 [2024-07-13 15:41:34.836998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.837013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.842 [2024-07-13 15:41:34.837026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.837041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.842 [2024-07-13 15:41:34.837054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.842 [2024-07-13 15:41:34.837069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.842 [2024-07-13 15:41:34.837082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.837113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.837142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.837195] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.837223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.837249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.837276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.837309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.837335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.837362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.837389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.837416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.837443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.837469] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.837501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.837529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.837556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.837582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.837609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.843 [2024-07-13 15:41:34.837636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.843 [2024-07-13 15:41:34.837662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.843 [2024-07-13 15:41:34.837689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.843 [2024-07-13 15:41:34.837716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.843 [2024-07-13 15:41:34.837763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.843 [2024-07-13 15:41:34.837792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.843 [2024-07-13 15:41:34.837820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.837862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.837901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.837928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.837956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.837984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.837999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.838012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.838027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.838040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.838055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.838068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 
[2024-07-13 15:41:34.838082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.838096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.838111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.838124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.838138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.838151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.838166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.838179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.838194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.838207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.838222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.838239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.838255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.838268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.838283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.838295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.838310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.843 [2024-07-13 15:41:34.838323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.843 [2024-07-13 15:41:34.838338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.838351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.838365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.838378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.838393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.838406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.838420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.838434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.838449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.838462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.838477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.838491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.838505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.838519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.838533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.838546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.838561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.838574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.838592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.838606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.838621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.838634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.838648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:111 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.838661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.838676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.838689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.838703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.838716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.838731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.838744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.838758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.838771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.838786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.838799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.838814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.838827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.838841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.838862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.838884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.838898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.838913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.838927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.838941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.838958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.838973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.838987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.839001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.839015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.839029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.839042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.839057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.839070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.839085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.839098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.839112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.839125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.839139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.839161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.839176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.839189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.839203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.844 [2024-07-13 15:41:34.839225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.839257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.844 [2024-07-13 15:41:34.839274] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81168 len:8 PRP1 0x0 PRP2 0x0 00:30:18.844 [2024-07-13 15:41:34.839287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.839305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.844 [2024-07-13 15:41:34.839316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.844 [2024-07-13 15:41:34.839327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81176 len:8 PRP1 0x0 PRP2 0x0 00:30:18.844 [2024-07-13 15:41:34.839339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.839352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.844 [2024-07-13 15:41:34.839366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.844 [2024-07-13 15:41:34.839378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81184 len:8 PRP1 0x0 PRP2 0x0 00:30:18.844 [2024-07-13 15:41:34.839391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.839403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.844 [2024-07-13 15:41:34.839414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.844 [2024-07-13 15:41:34.839424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81192 len:8 PRP1 0x0 PRP2 0x0 00:30:18.844 [2024-07-13 15:41:34.839437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.839449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.844 [2024-07-13 15:41:34.839459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.844 [2024-07-13 15:41:34.839470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81200 len:8 PRP1 0x0 PRP2 0x0 00:30:18.844 [2024-07-13 15:41:34.839487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.839500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.844 [2024-07-13 15:41:34.839511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.844 [2024-07-13 15:41:34.839522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81208 len:8 PRP1 0x0 PRP2 0x0 00:30:18.844 [2024-07-13 15:41:34.839534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.839546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.844 [2024-07-13 15:41:34.839557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.844 [2024-07-13 15:41:34.839567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:81216 len:8 PRP1 0x0 PRP2 0x0 00:30:18.844 [2024-07-13 15:41:34.839579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.844 [2024-07-13 15:41:34.839592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.845 [2024-07-13 15:41:34.839602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.845 [2024-07-13 15:41:34.839613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81224 len:8 PRP1 0x0 PRP2 0x0 00:30:18.845 [2024-07-13 15:41:34.839625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:34.839638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.845 [2024-07-13 15:41:34.839648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.845 [2024-07-13 15:41:34.839659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81232 len:8 PRP1 0x0 PRP2 0x0 00:30:18.845 [2024-07-13 15:41:34.839671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:34.839684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.845 [2024-07-13 15:41:34.839694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.845 [2024-07-13 15:41:34.839705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81240 len:8 PRP1 0x0 PRP2 0x0 00:30:18.845 [2024-07-13 15:41:34.839717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:34.839733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.845 [2024-07-13 15:41:34.839744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.845 [2024-07-13 15:41:34.839754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81248 len:8 PRP1 0x0 PRP2 0x0 00:30:18.845 [2024-07-13 15:41:34.839766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:34.839779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.845 [2024-07-13 15:41:34.839789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.845 [2024-07-13 15:41:34.839799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81256 len:8 PRP1 0x0 PRP2 0x0 00:30:18.845 [2024-07-13 15:41:34.839812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:34.839824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.845 [2024-07-13 15:41:34.839834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.845 [2024-07-13 15:41:34.839845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81264 len:8 PRP1 0x0 PRP2 0x0 
00:30:18.845 [2024-07-13 15:41:34.839871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:34.839887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.845 [2024-07-13 15:41:34.839898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.845 [2024-07-13 15:41:34.839909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81272 len:8 PRP1 0x0 PRP2 0x0 00:30:18.845 [2024-07-13 15:41:34.839921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:34.839933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.845 [2024-07-13 15:41:34.839944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.845 [2024-07-13 15:41:34.839955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81280 len:8 PRP1 0x0 PRP2 0x0 00:30:18.845 [2024-07-13 15:41:34.839967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:34.839979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.845 [2024-07-13 15:41:34.839990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.845 [2024-07-13 15:41:34.840000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81288 len:8 PRP1 0x0 PRP2 0x0 00:30:18.845 [2024-07-13 15:41:34.840013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:34.840025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.845 [2024-07-13 15:41:34.840035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.845 [2024-07-13 15:41:34.840046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81296 len:8 PRP1 0x0 PRP2 0x0 00:30:18.845 [2024-07-13 15:41:34.840058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:34.840071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.845 [2024-07-13 15:41:34.840081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.845 [2024-07-13 15:41:34.840093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81304 len:8 PRP1 0x0 PRP2 0x0 00:30:18.845 [2024-07-13 15:41:34.840109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:34.840122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.845 [2024-07-13 15:41:34.840133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.845 [2024-07-13 15:41:34.840143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81312 len:8 PRP1 0x0 PRP2 0x0 00:30:18.845 [2024-07-13 15:41:34.840162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:34.840174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.845 [2024-07-13 15:41:34.840184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.845 [2024-07-13 15:41:34.840195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81320 len:8 PRP1 0x0 PRP2 0x0 00:30:18.845 [2024-07-13 15:41:34.840208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:34.840265] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19e0f10 was disconnected and freed. reset controller. 00:30:18.845 [2024-07-13 15:41:34.840284] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:18.845 [2024-07-13 15:41:34.840318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.845 [2024-07-13 15:41:34.840341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:34.840357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.845 [2024-07-13 15:41:34.840370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:34.840383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.845 [2024-07-13 15:41:34.840396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:34.840409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.845 [2024-07-13 15:41:34.840422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:34.840435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.845 [2024-07-13 15:41:34.840484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ba850 (9): Bad file descriptor 00:30:18.845 [2024-07-13 15:41:34.843737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.845 [2024-07-13 15:41:34.974163] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
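The notices above record the failover path this host test exercises: qpair 0x19e0f10 on 10.0.0.2:4420 is torn down, the bdev_nvme layer starts failover to 10.0.0.2:4421, controller nqn.2016-06.io.spdk:cnode1 briefly sits in a failed state, and the subsequent reset completes successfully while the in-flight I/O is aborted with SQ DELETION status. The Python sketch below is not part of the SPDK test suite; it is a hypothetical helper (script name, default file name, and regexes are assumptions based only on the notice strings visible in this output) for pulling those failover events out of a console log shaped like this one, so they are easier to spot among the per-I/O abort notices.

#!/usr/bin/env python3
# Hypothetical helper (not part of SPDK): summarize failover-related events in a
# bdevperf console log such as the output above. The regexes are based only on
# the notice strings visible in this log and may need adjusting for other runs.
import re
import sys

EVENT_PATTERNS = [
    ("qpair disconnected", re.compile(r"qpair (0x[0-9a-f]+) was disconnected and freed")),
    ("failover started", re.compile(r"Start failover from (\S+) to (\S+)")),
    ("controller failed", re.compile(r"\[(\S+)\] in failed state")),
    ("controller resetting", re.compile(r"\[(\S+)\] resetting controller")),
    ("reset successful", re.compile(r"Resetting controller successful")),
]
ABORTED = re.compile(r"ABORTED - SQ DELETION")

def summarize(path):
    aborted = 0
    with open(path, errors="replace") as log:
        for line in log:
            # A single physical line in this log can hold many notices, so scan
            # for every occurrence rather than stopping at the first match.
            aborted += len(ABORTED.findall(line))
            for name, pattern in EVENT_PATTERNS:
                for match in pattern.finditer(line):
                    detail = " -> ".join(match.groups())
                    print(f"{name}: {detail}" if detail else name)
    print(f"completions aborted by SQ deletion: {aborted}")

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "try.txt")

Run as, for example, python3 summarize_failover.py try.txt against the captured output; on the excerpt above it would report the qpair disconnect, the failover from 10.0.0.2:4420 to 10.0.0.2:4421, the controller reset, and the count of completions aborted by SQ deletion.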
00:30:18.845 [2024-07-13 15:41:38.430472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.845 [2024-07-13 15:41:38.430548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:38.430579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.845 [2024-07-13 15:41:38.430596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:38.430613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.845 [2024-07-13 15:41:38.430637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:38.430653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.845 [2024-07-13 15:41:38.430667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:38.430682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.845 [2024-07-13 15:41:38.430695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:38.430710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.845 [2024-07-13 15:41:38.430724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:38.430738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.845 [2024-07-13 15:41:38.430766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:38.430782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.845 [2024-07-13 15:41:38.430795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:38.430810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.845 [2024-07-13 15:41:38.430823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:38.430837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.845 [2024-07-13 15:41:38.430849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:38.430864] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.845 [2024-07-13 15:41:38.430903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:38.430919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.845 [2024-07-13 15:41:38.430933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:38.430948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.845 [2024-07-13 15:41:38.430961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:38.430976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.845 [2024-07-13 15:41:38.430989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:38.431005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.845 [2024-07-13 15:41:38.431018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:38.431037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.845 [2024-07-13 15:41:38.431051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:38.431066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.845 [2024-07-13 15:41:38.431080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.845 [2024-07-13 15:41:38.431095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.845 [2024-07-13 15:41:38.431109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.431137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.431165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431196] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.431209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.431236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.431263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.431290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.431316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.431343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.431370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.846 [2024-07-13 15:41:38.431401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.846 [2024-07-13 15:41:38.431429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.846 [2024-07-13 15:41:38.431456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:94576 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.846 [2024-07-13 15:41:38.431483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.846 [2024-07-13 15:41:38.431509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.846 [2024-07-13 15:41:38.431536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.846 [2024-07-13 15:41:38.431564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.431591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.431619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.431646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.431673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.431700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.431727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:18.846 [2024-07-13 15:41:38.431758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.431785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.431812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.431839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.431871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.431918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.431945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.431973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.431988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.432001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.432016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.846 [2024-07-13 15:41:38.432029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.846 [2024-07-13 15:41:38.432044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.847 [2024-07-13 15:41:38.432058] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.847 [2024-07-13 15:41:38.432085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.847 [2024-07-13 15:41:38.432113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432352] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.847 [2024-07-13 15:41:38.432754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.847 [2024-07-13 15:41:38.432781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.847 [2024-07-13 15:41:38.432809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.847 [2024-07-13 15:41:38.432837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.847 [2024-07-13 15:41:38.432875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.847 [2024-07-13 15:41:38.432905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.432976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.432989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.433003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.433016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.433030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.433044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.433058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.433071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.433086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.433098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.433113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.433126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.433141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.433153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.433168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.433181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.433195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.433212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 
[2024-07-13 15:41:38.433227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.433240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.433255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.433268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.433282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.433295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.433310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.433323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.433338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.847 [2024-07-13 15:41:38.433351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.847 [2024-07-13 15:41:38.433366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.433380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.433395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.433408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.433423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.433436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.433451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.433464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.433480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.433493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.433508] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.433521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.433535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.433549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.433563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.848 [2024-07-13 15:41:38.433580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.433596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.433609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.433624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.433638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.433652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.433665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.433680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.433693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.433708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.433721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.433736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.433748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.433763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.433776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.433791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.433804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.433820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.433833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.433848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.433862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.433884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.433898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.433913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.433926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.433945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.433959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.433973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.433987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.434001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.434015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.434029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.434042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.434057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.434070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.434085] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.434099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.434113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.434126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.434141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.434154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.434168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.434181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.434196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.434209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.434224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:38.434237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.434251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b85680 is same with the state(5) to be set 00:30:18.848 [2024-07-13 15:41:38.434268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.848 [2024-07-13 15:41:38.434280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.848 [2024-07-13 15:41:38.434292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95144 len:8 PRP1 0x0 PRP2 0x0 00:30:18.848 [2024-07-13 15:41:38.434309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:38.434377] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b85680 was disconnected and freed. reset controller. 
00:30:18.848 [2024-07-13 15:41:38.434395] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:30:18.848 [2024-07-13 15:41:38.434429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:18.848 [2024-07-13 15:41:38.434447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:18.848 [2024-07-13 15:41:38.434462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:18.848 [2024-07-13 15:41:38.434475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:18.848 [2024-07-13 15:41:38.434489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:18.848 [2024-07-13 15:41:38.434501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:18.848 [2024-07-13 15:41:38.434515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:18.848 [2024-07-13 15:41:38.434528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:18.848 [2024-07-13 15:41:38.434541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:18.848 [2024-07-13 15:41:38.434582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ba850 (9): Bad file descriptor 
00:30:18.848 [2024-07-13 15:41:38.437824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:30:18.848 [2024-07-13 15:41:38.521756] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
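The block above is the interesting part of this burst: queued I/O on qpair 0x1b85680 is aborted with SQ DELETION, bdev_nvme fails the path over from 10.0.0.2:4421 to 10.0.0.2:4422 on nqn.2016-06.io.spdk:cnode1, and the controller reset completes successfully. As a rough illustrative sketch only (not taken from this log): a two-listener nvmf/tcp target and an initiator bdev attached to both paths could be configured with SPDK's rpc.py roughly as below. The bdev name Malloc0, controller name NVMe0, malloc sizes, and the --multipath flag on the second attach are assumptions for illustration; the NQN, addresses, and ports come from the log lines above.
# Target side: TCP transport, one subsystem, one namespace, listeners on both ports seen in the log
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# Initiator side: attach both trids under the same controller name so bdev_nvme can fail over between them
scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --multipath failover   # --multipath is an assumption about the attach options
When the first listener goes away, bdev_nvme aborts the in-flight submission queue entries (the ABORTED - SQ DELETION notices), marks the controller failed, and reconnects via the second trid, which is exactly the sequence the notices above trace.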
00:30:18.848 [2024-07-13 15:41:42.954830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.848 [2024-07-13 15:41:42.954896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:42.954929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.848 [2024-07-13 15:41:42.954945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:42.954962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:33936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.848 [2024-07-13 15:41:42.954976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:42.954991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:33944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.848 [2024-07-13 15:41:42.955005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:42.955020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.848 [2024-07-13 15:41:42.955033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:42.955048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.848 [2024-07-13 15:41:42.955068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.848 [2024-07-13 15:41:42.955083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.848 [2024-07-13 15:41:42.955097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:33976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:33984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955196] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:34000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:34016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:34024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:34040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:34048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:34056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:34064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955479] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:34120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:34152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:34160 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:34168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:34200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:34216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.955976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.955990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.956003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.956018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.956032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.956046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 
[2024-07-13 15:41:42.956059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.956074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.956087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.956102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:34256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.956115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.956130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:34264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.956150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.956166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:34272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.956179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.956194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.956207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.956222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:34288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.956235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.956250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:34296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.956264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.956279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.956292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.956307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:34312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.956320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.956335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.956348] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.956363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.956376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.956391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:34336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.849 [2024-07-13 15:41:42.956404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.849 [2024-07-13 15:41:42.956418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.850 [2024-07-13 15:41:42.956432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.850 [2024-07-13 15:41:42.956446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:33856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.850 [2024-07-13 15:41:42.956460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.850 [2024-07-13 15:41:42.956474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:33864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.850 [2024-07-13 15:41:42.956488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.850 [2024-07-13 15:41:42.956506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:34352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.850 [2024-07-13 15:41:42.956520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.850 [2024-07-13 15:41:42.956536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.850 [2024-07-13 15:41:42.956549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.850 [2024-07-13 15:41:42.956564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:34368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.850 [2024-07-13 15:41:42.956578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.850 [2024-07-13 15:41:42.956593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.850 [2024-07-13 15:41:42.956606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.850 [2024-07-13 15:41:42.956621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:34384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.850 [2024-07-13 15:41:42.956634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.850 [2024-07-13 15:41:42.956649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:34392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.850 [2024-07-13 15:41:42.956662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.850 [2024-07-13 15:41:42.956677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.850 [2024-07-13 15:41:42.956691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.850 [2024-07-13 15:41:42.956705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:34408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.850 [2024-07-13 15:41:42.956718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.850 [2024-07-13 15:41:42.956733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.850 [2024-07-13 15:41:42.956746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.850 [2024-07-13 15:41:42.956761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:34424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.850 [2024-07-13 15:41:42.956774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.850 [2024-07-13 15:41:42.956789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:34432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.850 [2024-07-13 15:41:42.956803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.850 [2024-07-13 15:41:42.956817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:34440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.851 [2024-07-13 15:41:42.956831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.956845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.851 [2024-07-13 15:41:42.956862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.956884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.851 [2024-07-13 15:41:42.956898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.956913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:34464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.851 [2024-07-13 15:41:42.956926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.956941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:34472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.851 [2024-07-13 15:41:42.956954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.956968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.851 [2024-07-13 15:41:42.956982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.956997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:34488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.851 [2024-07-13 15:41:42.957010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.957025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.851 [2024-07-13 15:41:42.957038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.957068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.851 [2024-07-13 15:41:42.957085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34504 len:8 PRP1 0x0 PRP2 0x0 00:30:18.851 [2024-07-13 15:41:42.957098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.957275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.851 [2024-07-13 15:41:42.957295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.851 [2024-07-13 15:41:42.957306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33872 len:8 PRP1 0x0 PRP2 0x0 00:30:18.851 [2024-07-13 15:41:42.957319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.957335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.851 [2024-07-13 15:41:42.957347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.851 [2024-07-13 15:41:42.957358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33880 len:8 PRP1 0x0 PRP2 0x0 00:30:18.851 [2024-07-13 15:41:42.957371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.957384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.851 [2024-07-13 15:41:42.957394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.851 [2024-07-13 15:41:42.957405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33888 len:8 PRP1 0x0 PRP2 0x0 00:30:18.851 [2024-07-13 15:41:42.957418] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.957435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.851 [2024-07-13 15:41:42.957447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.851 [2024-07-13 15:41:42.957458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33896 len:8 PRP1 0x0 PRP2 0x0 00:30:18.851 [2024-07-13 15:41:42.957470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.957483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.851 [2024-07-13 15:41:42.957493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.851 [2024-07-13 15:41:42.957504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33904 len:8 PRP1 0x0 PRP2 0x0 00:30:18.851 [2024-07-13 15:41:42.957516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.957529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.851 [2024-07-13 15:41:42.957539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.851 [2024-07-13 15:41:42.957550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33912 len:8 PRP1 0x0 PRP2 0x0 00:30:18.851 [2024-07-13 15:41:42.957563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.957575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.851 [2024-07-13 15:41:42.957592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.851 [2024-07-13 15:41:42.957604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33920 len:8 PRP1 0x0 PRP2 0x0 00:30:18.851 [2024-07-13 15:41:42.957617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.957629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.851 [2024-07-13 15:41:42.957640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.851 [2024-07-13 15:41:42.957651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34512 len:8 PRP1 0x0 PRP2 0x0 00:30:18.851 [2024-07-13 15:41:42.957663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.957675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.851 [2024-07-13 15:41:42.957686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.851 [2024-07-13 15:41:42.957697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34520 len:8 PRP1 0x0 PRP2 0x0 00:30:18.851 [2024-07-13 15:41:42.957709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.957722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.851 [2024-07-13 15:41:42.957732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.851 [2024-07-13 15:41:42.957743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34528 len:8 PRP1 0x0 PRP2 0x0 00:30:18.851 [2024-07-13 15:41:42.957756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.957768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.851 [2024-07-13 15:41:42.957778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.851 [2024-07-13 15:41:42.957789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34536 len:8 PRP1 0x0 PRP2 0x0 00:30:18.851 [2024-07-13 15:41:42.957805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.957818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.851 [2024-07-13 15:41:42.957829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.851 [2024-07-13 15:41:42.957840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34544 len:8 PRP1 0x0 PRP2 0x0 00:30:18.851 [2024-07-13 15:41:42.957852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.957871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.851 [2024-07-13 15:41:42.957884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.851 [2024-07-13 15:41:42.957896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34552 len:8 PRP1 0x0 PRP2 0x0 00:30:18.851 [2024-07-13 15:41:42.957908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.957921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.851 [2024-07-13 15:41:42.957931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.851 [2024-07-13 15:41:42.957942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34560 len:8 PRP1 0x0 PRP2 0x0 00:30:18.851 [2024-07-13 15:41:42.957954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.957967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.851 [2024-07-13 15:41:42.957983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.851 [2024-07-13 15:41:42.957994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34568 len:8 PRP1 0x0 PRP2 0x0 00:30:18.851 [2024-07-13 15:41:42.958007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 
15:41:42.958019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.851 [2024-07-13 15:41:42.958030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.851 [2024-07-13 15:41:42.958040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34576 len:8 PRP1 0x0 PRP2 0x0 00:30:18.851 [2024-07-13 15:41:42.958053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.958065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.851 [2024-07-13 15:41:42.958075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.851 [2024-07-13 15:41:42.958086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34584 len:8 PRP1 0x0 PRP2 0x0 00:30:18.851 [2024-07-13 15:41:42.958098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.958111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.851 [2024-07-13 15:41:42.958121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.851 [2024-07-13 15:41:42.958132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34592 len:8 PRP1 0x0 PRP2 0x0 00:30:18.851 [2024-07-13 15:41:42.958144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.958156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.851 [2024-07-13 15:41:42.958166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.851 [2024-07-13 15:41:42.958181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34600 len:8 PRP1 0x0 PRP2 0x0 00:30:18.851 [2024-07-13 15:41:42.958193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.958206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.851 [2024-07-13 15:41:42.958217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.851 [2024-07-13 15:41:42.958227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34608 len:8 PRP1 0x0 PRP2 0x0 00:30:18.851 [2024-07-13 15:41:42.958240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.958252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.851 [2024-07-13 15:41:42.958263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.851 [2024-07-13 15:41:42.958274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34616 len:8 PRP1 0x0 PRP2 0x0 00:30:18.851 [2024-07-13 15:41:42.958286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.851 [2024-07-13 15:41:42.958298] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.851 [2024-07-13 15:41:42.958309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.851 [2024-07-13 15:41:42.958319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34624 len:8 PRP1 0x0 PRP2 0x0 00:30:18.851 [2024-07-13 15:41:42.958331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.958344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.958360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.958371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34632 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.958383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.958396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.958406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.958417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34640 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.958429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.958442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.958452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.958462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34648 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.958474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.958487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.958497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.958508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34656 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.958520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.958532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.958546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.958557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34664 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.958569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.958581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:30:18.852 [2024-07-13 15:41:42.958592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.958602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34672 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.958614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.958627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.958637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.958647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34680 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.958659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.958672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.958682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.958692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34688 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.958704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.958716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.958727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.958737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34696 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.958749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.958761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.958771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.958782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34704 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.958794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.958806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.958816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.958826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34712 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.958838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.958850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.958860] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.958878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34720 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.958890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.958907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.958918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.958929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34728 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.958941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.958953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.958963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.958973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34736 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.958985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.958998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.959008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.959018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34744 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.959030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.959042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.959053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.959063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34752 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.959075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.959087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.959097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.959108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34760 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.959120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.959132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.959142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.959153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34768 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.959165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.959177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.959187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.959198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34776 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.959209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.959221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.959231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.959242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34784 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.959257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.959270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.959280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.959291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34792 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.959303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.959315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.959326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.959336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34800 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.959348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.959360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.959371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.959381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34808 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.959393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.959405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.959415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 
[2024-07-13 15:41:42.959425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34816 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.959437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.959449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.959459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.959469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34824 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.959481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.959493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.959503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.959514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34832 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.959526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.852 [2024-07-13 15:41:42.959538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.852 [2024-07-13 15:41:42.959547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.852 [2024-07-13 15:41:42.959558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34840 len:8 PRP1 0x0 PRP2 0x0 00:30:18.852 [2024-07-13 15:41:42.959570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.959582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.959592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.959605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34848 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.959618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.959630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.959640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.959651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34856 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.959662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.959675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.959685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.959695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34864 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.959707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.959719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.959729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.959740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33848 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.959751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.959763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.959773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.959784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33928 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.959796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.959808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.959818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.959828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33936 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.959840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.959852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.959862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.959882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33944 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.959895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.959908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.959918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.959928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33952 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.959941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.959957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.959968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.959978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:33960 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.959990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.960003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.960016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.960027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33968 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.960040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.960052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.960063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.960074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33976 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.960087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.960100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.960111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.960122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33984 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.960134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.960147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.960158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.960169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33992 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.960187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.960200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.960212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.960224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34000 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.960236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.960249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.960260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.960271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34008 len:8 PRP1 0x0 PRP2 0x0 
00:30:18.853 [2024-07-13 15:41:42.960284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.960297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.960308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.960319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34016 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.960335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.960348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.960359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.960370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34024 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.960384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.960397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.960408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.960419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34032 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.960431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.960449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.960461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.960471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34040 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.960484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.960497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.960507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.960518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34048 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.960531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.960543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.960554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.960564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34056 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.960582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.960595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.960605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.960616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34064 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.960629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.960642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.960652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.960663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34072 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.960675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.960688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.960698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.960712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34080 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.960725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.960737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.960748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.960759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34088 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.960771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.960784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.960794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.960805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34096 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.960817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.960836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.853 [2024-07-13 15:41:42.960847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.853 [2024-07-13 15:41:42.960858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34104 len:8 PRP1 0x0 PRP2 0x0 00:30:18.853 [2024-07-13 15:41:42.960877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.853 [2024-07-13 15:41:42.960891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.854 [2024-07-13 15:41:42.960902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.854 [2024-07-13 15:41:42.960913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34112 len:8 PRP1 0x0 PRP2 0x0 00:30:18.854 [2024-07-13 15:41:42.960926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.854 [2024-07-13 15:41:42.960938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.854 [2024-07-13 15:41:42.960948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.854 [2024-07-13 15:41:42.960959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34120 len:8 PRP1 0x0 PRP2 0x0 00:30:18.854 [2024-07-13 15:41:42.960977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.854 [2024-07-13 15:41:42.960990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.854 [2024-07-13 15:41:42.961001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.854 [2024-07-13 15:41:42.961012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34128 len:8 PRP1 0x0 PRP2 0x0 00:30:18.854 [2024-07-13 15:41:42.961024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.854 [2024-07-13 15:41:42.961037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.854 [2024-07-13 15:41:42.961047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.854 [2024-07-13 15:41:42.961058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34136 len:8 PRP1 0x0 PRP2 0x0 00:30:18.854 [2024-07-13 15:41:42.961070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.854 [2024-07-13 15:41:42.961083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.854 [2024-07-13 15:41:42.961097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.854 [2024-07-13 15:41:42.961108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34144 len:8 PRP1 0x0 PRP2 0x0 00:30:18.854 [2024-07-13 15:41:42.961121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.854 [2024-07-13 15:41:42.961133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.854 [2024-07-13 15:41:42.961144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.854 [2024-07-13 15:41:42.961154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34152 len:8 PRP1 0x0 PRP2 0x0 00:30:18.854 [2024-07-13 15:41:42.961167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:18.854 [2024-07-13 15:41:42.961179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.854 [2024-07-13 15:41:42.961189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.854 [2024-07-13 15:41:42.961200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34160 len:8 PRP1 0x0 PRP2 0x0 00:30:18.854 [2024-07-13 15:41:42.961212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.854 [2024-07-13 15:41:42.961225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.854 [2024-07-13 15:41:42.967429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.854 [2024-07-13 15:41:42.967458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34168 len:8 PRP1 0x0 PRP2 0x0 00:30:18.854 [2024-07-13 15:41:42.967473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.854 [2024-07-13 15:41:42.967489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.854 [2024-07-13 15:41:42.967500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.854 [2024-07-13 15:41:42.967511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34176 len:8 PRP1 0x0 PRP2 0x0 00:30:18.854 [2024-07-13 15:41:42.967524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.854 [2024-07-13 15:41:42.967536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.854 [2024-07-13 15:41:42.967546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.854 [2024-07-13 15:41:42.967557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34184 len:8 PRP1 0x0 PRP2 0x0 00:30:18.854 [2024-07-13 15:41:42.967570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.854 [2024-07-13 15:41:42.967583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.854 [2024-07-13 15:41:42.967593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.854 [2024-07-13 15:41:42.967604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34192 len:8 PRP1 0x0 PRP2 0x0 00:30:18.854 [2024-07-13 15:41:42.967616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.854 [2024-07-13 15:41:42.967628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.854 [2024-07-13 15:41:42.967638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.854 [2024-07-13 15:41:42.967648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34200 len:8 PRP1 0x0 PRP2 0x0 00:30:18.854 [2024-07-13 15:41:42.967660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.854 [2024-07-13 15:41:42.967678] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.854 [2024-07-13 15:41:42.967689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.854 [2024-07-13 15:41:42.967700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34208 len:8 PRP1 0x0 PRP2 0x0 00:30:18.854 [2024-07-13 15:41:42.967712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.854 [2024-07-13 15:41:42.967725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.854 [2024-07-13 15:41:42.967735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.854 [2024-07-13 15:41:42.967746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34216 len:8 PRP1 0x0 PRP2 0x0 00:30:18.854 [2024-07-13 15:41:42.967758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.854 [2024-07-13 15:41:42.967770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.854 [2024-07-13 15:41:42.967780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.854 [2024-07-13 15:41:42.967791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34224 len:8 PRP1 0x0 PRP2 0x0 00:30:18.854 [2024-07-13 15:41:42.967803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.854 [2024-07-13 15:41:42.967815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.854 [2024-07-13 15:41:42.967825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.854 [2024-07-13 15:41:42.967835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34232 len:8 PRP1 0x0 PRP2 0x0 00:30:18.854 [2024-07-13 15:41:42.967847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.854 [2024-07-13 15:41:42.967860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.854 [2024-07-13 15:41:42.967880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.854 [2024-07-13 15:41:42.967891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34240 len:8 PRP1 0x0 PRP2 0x0 00:30:18.854 [2024-07-13 15:41:42.967904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.854 [2024-07-13 15:41:42.967916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.854 [2024-07-13 15:41:42.967926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.854 [2024-07-13 15:41:42.967937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34248 len:8 PRP1 0x0 PRP2 0x0 00:30:18.854 [2024-07-13 15:41:42.967949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.854 [2024-07-13 15:41:42.967962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:30:18.854 [2024-07-13 15:41:42.967972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.854 [2024-07-13 15:41:42.967982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34256 len:8 PRP1 0x0 PRP2 0x0 00:30:18.854 [2024-07-13 15:41:42.967995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.854 [2024-07-13 15:41:42.968007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.854 [2024-07-13 15:41:42.968017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.854 [2024-07-13 15:41:42.968028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34264 len:8 PRP1 0x0 PRP2 0x0 00:30:18.854 [2024-07-13 15:41:42.968046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.854 [2024-07-13 15:41:42.968059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.854 [2024-07-13 15:41:42.968070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.854 [2024-07-13 15:41:42.968081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34272 len:8 PRP1 0x0 PRP2 0x0 00:30:18.854 [2024-07-13 15:41:42.968093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.854 [2024-07-13 15:41:42.968105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.854 [2024-07-13 15:41:42.968115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.854 [2024-07-13 15:41:42.968126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34280 len:8 PRP1 0x0 PRP2 0x0 00:30:18.854 [2024-07-13 15:41:42.968137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.854 [2024-07-13 15:41:42.968150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.854 [2024-07-13 15:41:42.968160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.854 [2024-07-13 15:41:42.968171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34288 len:8 PRP1 0x0 PRP2 0x0 00:30:18.854 [2024-07-13 15:41:42.968183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.854 [2024-07-13 15:41:42.968195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.968205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.968215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34296 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.968227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.968239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 
15:41:42.968250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.968261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34304 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.968272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.968285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.968295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.968306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34312 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.968318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.968330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.968340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.968351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34320 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.968363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.968375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.968385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.968399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34328 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.968412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.968424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.968435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.968445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34336 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.968457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.968469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.968480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.968490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34344 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.968502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.968514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.968524] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.968535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33856 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.968547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.968560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.968570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.968580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33864 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.968592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.968604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.968614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.968625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34352 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.968637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.968649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.968659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.968670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34360 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.968682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.968694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.968704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.968715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34368 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.968727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.968743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.968753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.968764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34376 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.968776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.968789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.968799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:30:18.855 [2024-07-13 15:41:42.968809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34384 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.968821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.968834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.968844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.968854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34392 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.968871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.968885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.968896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.968906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34400 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.968918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.968930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.968940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.968951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34408 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.968963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.968975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.968986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.968996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34416 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.969008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.969020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.969030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.969040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34424 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.969053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.969065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.969075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 
15:41:42.969085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34432 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.969101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.969114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.969125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.969136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34440 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.969148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.969160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.969170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.969180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34448 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.969192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.969205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.969215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.969225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34456 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.969237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.969250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.969260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.969270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34464 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.969282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.969295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.969305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.969315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34472 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.969327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.969339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.969349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.969360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34480 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.969372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.969384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.855 [2024-07-13 15:41:42.969395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.855 [2024-07-13 15:41:42.969405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34488 len:8 PRP1 0x0 PRP2 0x0 00:30:18.855 [2024-07-13 15:41:42.969417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.855 [2024-07-13 15:41:42.969430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.856 [2024-07-13 15:41:42.969440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.856 [2024-07-13 15:41:42.969454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34496 len:8 PRP1 0x0 PRP2 0x0 00:30:18.856 [2024-07-13 15:41:42.969466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.856 [2024-07-13 15:41:42.969479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:18.856 [2024-07-13 15:41:42.969490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:18.856 [2024-07-13 15:41:42.969500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34504 len:8 PRP1 0x0 PRP2 0x0 00:30:18.856 [2024-07-13 15:41:42.969513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.856 [2024-07-13 15:41:42.969583] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b85470 was disconnected and freed. reset controller. 
00:30:18.856 [2024-07-13 15:41:42.969601] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:18.856 [2024-07-13 15:41:42.969642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.856 [2024-07-13 15:41:42.969660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.856 [2024-07-13 15:41:42.969676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.856 [2024-07-13 15:41:42.969689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.856 [2024-07-13 15:41:42.969702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.856 [2024-07-13 15:41:42.969714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.856 [2024-07-13 15:41:42.969727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.856 [2024-07-13 15:41:42.969740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.856 [2024-07-13 15:41:42.969753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:18.856 [2024-07-13 15:41:42.969797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ba850 (9): Bad file descriptor 00:30:18.856 [2024-07-13 15:41:42.973048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:18.856 [2024-07-13 15:41:43.140988] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
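The block above is one complete failover cycle: every queued WRITE/READ on I/O qpair 1 is completed manually with ABORTED - SQ DELETION status, the qpair is disconnected and freed, bdev_nvme fails the trid over from 10.0.0.2:4422 to 10.0.0.2:4420, and the controller reset finishes successfully. The failover script later counts these "Resetting controller successful" notices to confirm that all three path failovers took place. A minimal sketch of that check, assuming the bdevperf output was captured to try.txt as in this run (the path is the one this CI node uses):

# Count successful controller resets in the captured bdevperf log.
log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
count=$(grep -c 'Resetting controller successful' "$log")

# One reset is expected per listener that was torn down (3 in this run).
if (( count != 3 )); then
    echo "expected 3 successful resets, saw $count" >&2
    exit 1
fi
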
00:30:18.856 00:30:18.856 Latency(us) 00:30:18.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:18.856 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:18.856 Verification LBA range: start 0x0 length 0x4000 00:30:18.856 NVMe0n1 : 15.00 8617.32 33.66 764.90 0.00 13614.12 952.70 23592.96 00:30:18.856 =================================================================================================================== 00:30:18.856 Total : 8617.32 33.66 764.90 0.00 13614.12 952.70 23592.96 00:30:18.856 Received shutdown signal, test time was about 15.000000 seconds 00:30:18.856 00:30:18.856 Latency(us) 00:30:18.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:18.856 =================================================================================================================== 00:30:18.856 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:18.856 15:41:49 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:18.856 15:41:49 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:30:18.856 15:41:49 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:30:18.856 15:41:49 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1222074 00:30:18.856 15:41:49 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:18.856 15:41:49 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1222074 /var/tmp/bdevperf.sock 00:30:18.856 15:41:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1222074 ']' 00:30:18.856 15:41:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:18.856 15:41:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:18.856 15:41:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:18.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
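With the 15-second run finished and the reset count verified, the script starts a second bdevperf instance in RPC-driven mode and waits for its UNIX-domain socket before configuring it. A minimal sketch of that launch, reusing the flags shown above (-q 128 queue depth, -o 4096 I/O size, -w verify workload, -t 1 second); the -z, -f and socket arguments are copied from the command line in the log, and the polling loop stands in for the script's waitforlisten helper:

# Launch bdevperf idle (-z defers the workload until driven over RPC).
BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
SOCK=/var/tmp/bdevperf.sock

"$BDEVPERF" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!

# Stand-in for waitforlisten: block until the RPC socket exists.
until [ -S "$SOCK" ]; do sleep 0.1; done
echo "bdevperf ($bdevperf_pid) listening on $SOCK"
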
00:30:18.856 15:41:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:18.856 15:41:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:18.856 15:41:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:18.856 15:41:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:18.856 15:41:49 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:18.856 [2024-07-13 15:41:49.548591] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:18.856 15:41:49 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:19.127 [2024-07-13 15:41:49.797364] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:19.127 15:41:49 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:19.693 NVMe0n1 00:30:19.693 15:41:50 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:19.951 00:30:19.952 15:41:50 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:20.520 00:30:20.520 15:41:51 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:20.520 15:41:51 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:20.779 15:41:51 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:21.037 15:41:51 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:24.322 15:41:54 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:24.322 15:41:54 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:24.322 15:41:54 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1222767 00:30:24.322 15:41:54 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:24.322 15:41:54 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1222767 00:30:25.257 0 00:30:25.515 15:41:56 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:25.515 [2024-07-13 15:41:49.067207] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:30:25.515 [2024-07-13 15:41:49.067290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1222074 ] 00:30:25.515 EAL: No free 2048 kB hugepages reported on node 1 00:30:25.515 [2024-07-13 15:41:49.098619] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:25.515 [2024-07-13 15:41:49.127051] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.515 [2024-07-13 15:41:49.210368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.515 [2024-07-13 15:41:51.639246] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:25.515 [2024-07-13 15:41:51.639332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.515 [2024-07-13 15:41:51.639354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.515 [2024-07-13 15:41:51.639386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.515 [2024-07-13 15:41:51.639400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.515 [2024-07-13 15:41:51.639415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.515 [2024-07-13 15:41:51.639428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.515 [2024-07-13 15:41:51.639442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.515 [2024-07-13 15:41:51.639455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.515 [2024-07-13 15:41:51.639469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.515 [2024-07-13 15:41:51.639513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.515 [2024-07-13 15:41:51.639545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1272850 (9): Bad file descriptor 00:30:25.515 [2024-07-13 15:41:51.732075] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:25.515 Running I/O for 1 seconds... 
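The try.txt dump above records what happened inside that second bdevperf run: NVMe0 had been attached over three TCP paths (ports 4420, 4421 and 4422) of nqn.2016-06.io.spdk:cnode1, so when the 4420 path goes away bdev_nvme fails over to 4421 and the controller reset succeeds while the 1-second verify workload keeps running. A minimal sketch of that multipath attach, repeating the rpc.py calls visible earlier in this stage (the workspace paths are specific to this CI node):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# Attaching the same controller name once per listener gives bdev_nvme
# alternate paths it can fail over between.
for port in 4420 4421 4422; do
    "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN"
done
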
00:30:25.515 00:30:25.515 Latency(us) 00:30:25.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.515 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:25.515 Verification LBA range: start 0x0 length 0x4000 00:30:25.515 NVMe0n1 : 1.01 8871.13 34.65 0.00 0.00 14370.87 1856.85 15922.82 00:30:25.515 =================================================================================================================== 00:30:25.515 Total : 8871.13 34.65 0.00 0.00 14370.87 1856.85 15922.82 00:30:25.515 15:41:56 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:25.515 15:41:56 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:25.772 15:41:56 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:25.772 15:41:56 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:25.772 15:41:56 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:26.030 15:41:56 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:26.289 15:41:57 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:29.572 15:42:00 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:29.572 15:42:00 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:29.572 15:42:00 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1222074 00:30:29.572 15:42:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1222074 ']' 00:30:29.572 15:42:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1222074 00:30:29.572 15:42:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:30:29.572 15:42:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:29.572 15:42:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1222074 00:30:29.572 15:42:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:29.572 15:42:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:29.572 15:42:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1222074' 00:30:29.572 killing process with pid 1222074 00:30:29.572 15:42:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1222074 00:30:29.572 15:42:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1222074 00:30:29.831 15:42:00 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:29.831 15:42:00 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:30.089 15:42:00 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:30.089 
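After the 1-second run, the script alternates a presence check with a path removal: bdev_nvme_get_controllers piped through grep -q NVMe0 confirms the controller is still attached, then one listener path is detached, and a final sleep gives the remaining path time to take over. A minimal sketch of that loop, built from the commands shown above:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

for port in 4422 4421; do
    # The controller must still be visible before each detach.
    "$RPC" -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0 || exit 1
    "$RPC" -s "$SOCK" bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN"
done
sleep 3
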
15:42:00 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:30.089 15:42:00 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:30.089 15:42:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:30.089 15:42:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:30:30.089 15:42:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:30.089 15:42:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:30:30.089 15:42:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:30.089 15:42:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:30.089 rmmod nvme_tcp 00:30:30.089 rmmod nvme_fabrics 00:30:30.089 rmmod nvme_keyring 00:30:30.089 15:42:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:30.089 15:42:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:30:30.089 15:42:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:30:30.089 15:42:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1219936 ']' 00:30:30.089 15:42:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1219936 00:30:30.089 15:42:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1219936 ']' 00:30:30.089 15:42:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1219936 00:30:30.089 15:42:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:30:30.089 15:42:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:30.089 15:42:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1219936 00:30:30.348 15:42:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:30.348 15:42:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:30.348 15:42:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1219936' 00:30:30.348 killing process with pid 1219936 00:30:30.348 15:42:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1219936 00:30:30.348 15:42:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1219936 00:30:30.348 15:42:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:30.348 15:42:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:30.348 15:42:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:30.348 15:42:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:30.348 15:42:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:30.348 15:42:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.348 15:42:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:30.348 15:42:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.889 15:42:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:32.889 00:30:32.889 real 0m35.056s 00:30:32.889 user 2m1.317s 00:30:32.889 sys 0m6.699s 00:30:32.889 15:42:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:32.889 15:42:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
00:30:32.889 ************************************ 00:30:32.889 END TEST nvmf_failover 00:30:32.889 ************************************ 00:30:32.889 15:42:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:32.889 15:42:03 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:32.889 15:42:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:32.889 15:42:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:32.889 15:42:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:32.889 ************************************ 00:30:32.889 START TEST nvmf_host_discovery 00:30:32.889 ************************************ 00:30:32.889 15:42:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:32.889 * Looking for test storage... 00:30:32.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:32.889 15:42:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:32.889 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:32.889 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:32.889 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:32.889 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:32.889 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:32.889 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:32.889 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:32.889 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:32.889 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:32.889 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:32.889 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:32.889 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:32.889 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:32.889 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:32.889 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:32.889 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:32.889 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:32.889 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:32.890 15:42:03 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:30:32.890 15:42:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:34.798 15:42:05 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:34.798 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:34.798 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:34.798 15:42:05 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:34.798 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:34.798 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:34.798 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:34.799 15:42:05 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:34.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:34.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:30:34.799 00:30:34.799 --- 10.0.0.2 ping statistics --- 00:30:34.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.799 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:34.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:34.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:30:34.799 00:30:34.799 --- 10.0.0.1 ping statistics --- 00:30:34.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.799 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1225570 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1225570 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1225570 ']' 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:34.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:34.799 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:34.799 [2024-07-13 15:42:05.355192] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:30:34.799 [2024-07-13 15:42:05.355281] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:34.799 EAL: No free 2048 kB hugepages reported on node 1 00:30:34.799 [2024-07-13 15:42:05.393021] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:34.799 [2024-07-13 15:42:05.425431] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.799 [2024-07-13 15:42:05.514156] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:34.799 [2024-07-13 15:42:05.514216] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:34.799 [2024-07-13 15:42:05.514232] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:34.799 [2024-07-13 15:42:05.514246] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:34.799 [2024-07-13 15:42:05.514258] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
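The block above is nvmf/common.sh's TCP bring-up: the target-side interface (cvl_0_0) is moved into its own network namespace, both ends are addressed on 10.0.0.0/24, reachability is verified with a ping in each direction, and nvmf_tgt is then launched inside that namespace. A condensed sketch of those steps, using only the interface names, addresses and flags visible in this trace (a reconstruction of what the trace shows, not the test script itself; the backgrounding and the short binary path are shorthand):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                                        # target gets a private namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (NVMF_INITIATOR_IP)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side (NVMF_FIRST_TARGET_IP)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # target app inside the namespace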
00:30:34.799 [2024-07-13 15:42:05.514288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.058 [2024-07-13 15:42:05.662982] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.058 [2024-07-13 15:42:05.671166] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.058 null0 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.058 null1 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1225596 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 1225596 /tmp/host.sock 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1225596 ']' 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:35.058 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:35.058 15:42:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.058 [2024-07-13 15:42:05.747709] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:30:35.058 [2024-07-13 15:42:05.747789] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225596 ] 00:30:35.058 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.058 [2024-07-13 15:42:05.784871] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:35.058 [2024-07-13 15:42:05.815312] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.316 [2024-07-13 15:42:05.907781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.316 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:35.316 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:30:35.316 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:35.316 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:35.316 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.316 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.316 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.316 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:35.316 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.316 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.316 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.316 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:30:35.316 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:30:35.316 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:35.316 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:35.316 15:42:06 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.316 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.316 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:35.316 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:35.316 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.575 15:42:06 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.575 [2024-07-13 15:42:06.304822] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:35.575 15:42:06 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:35.575 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:30:35.853 15:42:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:30:36.464 [2024-07-13 15:42:07.092036] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:36.464 [2024-07-13 15:42:07.092063] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:36.464 [2024-07-13 15:42:07.092093] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:36.464 [2024-07-13 15:42:07.178384] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:36.723 [2024-07-13 15:42:07.283272] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:36.723 [2024-07-13 15:42:07.283300] 
bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:36.723 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:36.723 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:36.723 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:36.723 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:36.723 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.723 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:36.723 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:36.723 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:36.723 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:36.723 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
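Everything between the target start and the "attach nvme0 done" message above is driven over JSON-RPC: the namespaced target gets a TCP transport, a discovery listener on 8009 and two null bdevs; a second nvmf_tgt bound to /tmp/host.sock plays the host and is pointed at the discovery service; then the cnode0 subsystem is created, given null0, a data listener on 4420 and the host NQN. A hedged reconstruction of that sequence, copied from the rpc_cmd calls in this trace (rpc_cmd is the suite's own wrapper, not re-implemented here):

  # target side (default RPC socket of the nvmf_tgt running in cvl_0_0_ns_spdk)
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  rpc_cmd bdev_null_create null0 1000 512
  rpc_cmd bdev_null_create null1 1000 512
  rpc_cmd bdev_wait_for_examine

  # host side: a second nvmf_tgt on /tmp/host.sock acts as the initiator and follows discovery
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

  # target side again: publish a subsystem for discovery to report
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

Only once the host NQN is allowed does the discovery poller attach cnode0 over 4420; the controller then shows up as nvme0 with bdev nvme0n1, which is what the polling checks around this point are waiting for.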
00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:36.982 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.241 [2024-07-13 15:42:07.921473] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:37.241 [2024-07-13 15:42:07.922712] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:37.241 [2024-07-13 15:42:07.922749] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:37.241 15:42:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.500 15:42:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:37.500 15:42:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:37.500 15:42:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:37.500 15:42:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:37.500 15:42:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:37.500 15:42:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:37.500 15:42:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:37.500 [2024-07-13 15:42:08.010542] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:37.500 15:42:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:37.500 15:42:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:37.500 15:42:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.500 15:42:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.500 15:42:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:37.500 15:42:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:37.500 15:42:08 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:30:37.500 15:42:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.500 15:42:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:37.500 15:42:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:30:37.500 [2024-07-13 15:42:08.115273] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:37.500 [2024-07-13 15:42:08.115300] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:37.500 [2024-07-13 15:42:08.115312] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:38.436 [2024-07-13 15:42:09.133807] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:38.436 [2024-07-13 15:42:09.133851] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:38.436 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:38.437 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.437 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:38.437 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:38.437 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:38.437 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:38.437 [2024-07-13 15:42:09.141813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:38.437 [2024-07-13 15:42:09.141862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.437 [2024-07-13 15:42:09.141891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:38.437 [2024-07-13 15:42:09.141915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.437 [2024-07-13 15:42:09.141929] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:38.437 [2024-07-13 15:42:09.141944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.437 [2024-07-13 15:42:09.141959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:38.437 [2024-07-13 15:42:09.141972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.437 [2024-07-13 15:42:09.141986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb096c0 is same with the state(5) to be set 00:30:38.437 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.437 [2024-07-13 15:42:09.151818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb096c0 (9): Bad file descriptor 00:30:38.437 [2024-07-13 15:42:09.161885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:38.437 [2024-07-13 15:42:09.162184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.437 [2024-07-13 15:42:09.162216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb096c0 with addr=10.0.0.2, port=4420 00:30:38.437 [2024-07-13 15:42:09.162235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb096c0 is same with the state(5) to be set 00:30:38.437 [2024-07-13 15:42:09.162261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb096c0 (9): Bad file descriptor 00:30:38.437 [2024-07-13 15:42:09.162309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:38.437 [2024-07-13 15:42:09.162330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:38.437 [2024-07-13 15:42:09.162349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:38.437 [2024-07-13 15:42:09.162372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
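The connect()/reset error storm that starts here follows directly from the last few RPC steps in the trace: null1 was attached to cnode0 (so the host picked up a second bdev, nvme0n2), a second data listener was opened on 4421 (a new path for nvme0), and then the original 4420 listener was removed. Summarised from the rpc_cmd calls above (commands copied from the trace; the comments are interpretation):

  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1                              # host sees nvme0n1 + nvme0n2
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421   # second path for nvme0
  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # errno = 111 is ECONNREFUSED: nothing listens on 10.0.0.2:4420 any more, so every reconnect
  # attempt against that path fails until the refreshed discovery log page ("...4420 not found",
  # "...4421 found again" further down) drops the stale path and only 4421 remains.

The repeated "resetting controller" / "Resetting controller failed." lines are therefore retry noise while the path list converges, not in themselves a failure of the test.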
00:30:38.437 [2024-07-13 15:42:09.171983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:38.437 [2024-07-13 15:42:09.172192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.437 [2024-07-13 15:42:09.172223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb096c0 with addr=10.0.0.2, port=4420 00:30:38.437 [2024-07-13 15:42:09.172242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb096c0 is same with the state(5) to be set 00:30:38.437 [2024-07-13 15:42:09.172266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb096c0 (9): Bad file descriptor 00:30:38.437 [2024-07-13 15:42:09.172288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:38.437 [2024-07-13 15:42:09.172303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:38.437 [2024-07-13 15:42:09.172318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:38.437 [2024-07-13 15:42:09.172338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:38.437 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:38.437 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:38.437 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:38.437 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:38.437 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:38.437 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:38.437 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:38.437 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:38.437 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:38.437 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.437 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:38.437 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:38.437 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:38.437 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:38.437 [2024-07-13 15:42:09.182057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:38.437 [2024-07-13 15:42:09.182237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.437 [2024-07-13 15:42:09.182267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb096c0 with addr=10.0.0.2, port=4420 00:30:38.437 [2024-07-13 15:42:09.182283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb096c0 is same with the state(5) to be set 00:30:38.437 [2024-07-13 15:42:09.182306] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb096c0 (9): Bad file descriptor 00:30:38.437 [2024-07-13 15:42:09.182326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:38.437 [2024-07-13 15:42:09.182346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:38.437 [2024-07-13 15:42:09.182359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:38.437 [2024-07-13 15:42:09.183303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:38.437 [2024-07-13 15:42:09.192131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:38.437 [2024-07-13 15:42:09.192381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.437 [2024-07-13 15:42:09.192412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb096c0 with addr=10.0.0.2, port=4420 00:30:38.437 [2024-07-13 15:42:09.192429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb096c0 is same with the state(5) to be set 00:30:38.437 [2024-07-13 15:42:09.192454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb096c0 (9): Bad file descriptor 00:30:38.437 [2024-07-13 15:42:09.192491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:38.437 [2024-07-13 15:42:09.192510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:38.437 [2024-07-13 15:42:09.192525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:38.437 [2024-07-13 15:42:09.192546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:38.697 [2024-07-13 15:42:09.202231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:38.697 [2024-07-13 15:42:09.202441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.697 [2024-07-13 15:42:09.202470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb096c0 with addr=10.0.0.2, port=4420 00:30:38.697 [2024-07-13 15:42:09.202487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb096c0 is same with the state(5) to be set 00:30:38.698 [2024-07-13 15:42:09.202510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb096c0 (9): Bad file descriptor 00:30:38.698 [2024-07-13 15:42:09.202557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:38.698 [2024-07-13 15:42:09.202576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:38.698 [2024-07-13 15:42:09.202589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:38.698 [2024-07-13 15:42:09.202608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
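The eval'd conditions interleaved with these errors come from the suite's waitforcondition helper, and the xtrace expansion makes them hard to read. Roughly, the pattern being executed looks like the sketch below, reconstructed from the expanded commands in this trace (the real helpers live in common/autotest_common.sh and host/discovery.sh and may differ in detail; rpc_cmd is the suite's wrapper):

  get_subsystem_names() { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
  get_bdev_list()       { rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }
  get_subsystem_paths() { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs; }

  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0     # e.g. [[ "$(get_subsystem_names)" == "nvme0" ]]
          sleep 1
      done
      return 1
  }

  # the checks running around this point in the log:
  waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
  waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'   # i.e. "4421" once 4420 is gone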
00:30:38.698 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.698 [2024-07-13 15:42:09.212313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:38.698 [2024-07-13 15:42:09.212531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.698 [2024-07-13 15:42:09.212563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb096c0 with addr=10.0.0.2, port=4420 00:30:38.698 [2024-07-13 15:42:09.212581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb096c0 is same with the state(5) to be set 00:30:38.698 [2024-07-13 15:42:09.212605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb096c0 (9): Bad file descriptor 00:30:38.698 [2024-07-13 15:42:09.212641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:38.698 [2024-07-13 15:42:09.212661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:38.698 [2024-07-13 15:42:09.212676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:38.698 [2024-07-13 15:42:09.212697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:38.698 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:38.698 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:38.698 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:38.698 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:38.698 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:38.698 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:38.698 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:38.698 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:38.698 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:38.698 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.698 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:38.698 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:38.698 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:38.698 15:42:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:38.698 [2024-07-13 15:42:09.222396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:38.698 [2024-07-13 15:42:09.222609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.698 [2024-07-13 15:42:09.222640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb096c0 with addr=10.0.0.2, port=4420 00:30:38.698 
[2024-07-13 15:42:09.222658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb096c0 is same with the state(5) to be set 00:30:38.698 [2024-07-13 15:42:09.222682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb096c0 (9): Bad file descriptor 00:30:38.698 [2024-07-13 15:42:09.222894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:38.698 [2024-07-13 15:42:09.222934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:38.698 [2024-07-13 15:42:09.222948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:38.698 [2024-07-13 15:42:09.222968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:38.698 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.698 [2024-07-13 15:42:09.232476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:38.698 [2024-07-13 15:42:09.232688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.698 [2024-07-13 15:42:09.232715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb096c0 with addr=10.0.0.2, port=4420 00:30:38.698 [2024-07-13 15:42:09.232732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb096c0 is same with the state(5) to be set 00:30:38.698 [2024-07-13 15:42:09.232753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb096c0 (9): Bad file descriptor 00:30:38.698 [2024-07-13 15:42:09.232785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:38.698 [2024-07-13 15:42:09.232803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:38.698 [2024-07-13 15:42:09.232816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:38.698 [2024-07-13 15:42:09.232835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:38.698 [2024-07-13 15:42:09.242555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:38.698 [2024-07-13 15:42:09.242758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.698 [2024-07-13 15:42:09.242788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb096c0 with addr=10.0.0.2, port=4420 00:30:38.698 [2024-07-13 15:42:09.242806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb096c0 is same with the state(5) to be set 00:30:38.698 [2024-07-13 15:42:09.242830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb096c0 (9): Bad file descriptor 00:30:38.698 [2024-07-13 15:42:09.242889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:38.698 [2024-07-13 15:42:09.242926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:38.698 [2024-07-13 15:42:09.242941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:30:38.698 [2024-07-13 15:42:09.242960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:38.698 [2024-07-13 15:42:09.252638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:38.698 [2024-07-13 15:42:09.252876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.698 [2024-07-13 15:42:09.252921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb096c0 with addr=10.0.0.2, port=4420 00:30:38.698 [2024-07-13 15:42:09.252937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb096c0 is same with the state(5) to be set 00:30:38.698 [2024-07-13 15:42:09.252958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb096c0 (9): Bad file descriptor 00:30:38.698 [2024-07-13 15:42:09.252991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:38.698 [2024-07-13 15:42:09.253008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:38.698 [2024-07-13 15:42:09.253021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:38.698 [2024-07-13 15:42:09.253041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:38.698 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:30:38.698 15:42:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:30:38.698 [2024-07-13 15:42:09.261079] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:38.698 [2024-07-13 15:42:09.261111] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # 
expected_count=0 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # 
xargs 00:30:39.635 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.893 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:30:39.893 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:39.893 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:30:39.893 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:30:39.893 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:39.893 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.894 15:42:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:40.829 [2024-07-13 15:42:11.492698] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:40.829 [2024-07-13 15:42:11.492729] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:40.830 [2024-07-13 15:42:11.492756] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:40.830 [2024-07-13 15:42:11.580088] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:41.090 [2024-07-13 15:42:11.647325] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:41.090 [2024-07-13 15:42:11.647366] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:41.090 request: 00:30:41.090 { 00:30:41.090 "name": "nvme", 00:30:41.090 "trtype": "tcp", 00:30:41.090 "traddr": "10.0.0.2", 00:30:41.090 "adrfam": "ipv4", 00:30:41.090 "trsvcid": 
"8009", 00:30:41.090 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:41.090 "wait_for_attach": true, 00:30:41.090 "method": "bdev_nvme_start_discovery", 00:30:41.090 "req_id": 1 00:30:41.090 } 00:30:41.090 Got JSON-RPC error response 00:30:41.090 response: 00:30:41.090 { 00:30:41.090 "code": -17, 00:30:41.090 "message": "File exists" 00:30:41.090 } 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # 
type -t rpc_cmd 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:41.090 request: 00:30:41.090 { 00:30:41.090 "name": "nvme_second", 00:30:41.090 "trtype": "tcp", 00:30:41.090 "traddr": "10.0.0.2", 00:30:41.090 "adrfam": "ipv4", 00:30:41.090 "trsvcid": "8009", 00:30:41.090 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:41.090 "wait_for_attach": true, 00:30:41.090 "method": "bdev_nvme_start_discovery", 00:30:41.090 "req_id": 1 00:30:41.090 } 00:30:41.090 Got JSON-RPC error response 00:30:41.090 response: 00:30:41.090 { 00:30:41.090 "code": -17, 00:30:41.090 "message": "File exists" 00:30:41.090 } 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:41.090 15:42:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:42.470 [2024-07-13 15:42:12.850832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.470 [2024-07-13 15:42:12.850901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23810 with addr=10.0.0.2, port=8010 00:30:42.470 [2024-07-13 15:42:12.850936] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:42.470 [2024-07-13 15:42:12.850952] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:42.470 [2024-07-13 15:42:12.850966] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:43.406 [2024-07-13 15:42:13.853301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.406 [2024-07-13 15:42:13.853379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb23810 with addr=10.0.0.2, port=8010 00:30:43.406 [2024-07-13 15:42:13.853412] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:43.406 [2024-07-13 15:42:13.853428] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:43.406 [2024-07-13 15:42:13.853443] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:44.347 [2024-07-13 15:42:14.855445] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:44.347 request: 00:30:44.347 { 00:30:44.347 "name": "nvme_second", 00:30:44.347 "trtype": "tcp", 00:30:44.347 "traddr": "10.0.0.2", 00:30:44.347 "adrfam": "ipv4", 00:30:44.347 "trsvcid": "8010", 00:30:44.347 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:44.347 "wait_for_attach": false, 00:30:44.347 "attach_timeout_ms": 3000, 00:30:44.347 "method": "bdev_nvme_start_discovery", 00:30:44.347 "req_id": 1 00:30:44.347 } 00:30:44.347 Got JSON-RPC error response 00:30:44.347 response: 00:30:44.347 { 00:30:44.347 "code": -110, 00:30:44.347 "message": "Connection timed out" 00:30:44.347 } 00:30:44.347 15:42:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:44.347 15:42:14 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@651 -- # es=1 00:30:44.347 15:42:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:44.347 15:42:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:44.347 15:42:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:44.347 15:42:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:30:44.347 15:42:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:44.347 15:42:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.347 15:42:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:44.347 15:42:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:44.347 15:42:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:44.347 15:42:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:44.347 15:42:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:44.347 15:42:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:30:44.347 15:42:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:30:44.347 15:42:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1225596 00:30:44.347 15:42:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:30:44.347 15:42:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:44.348 15:42:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:30:44.348 15:42:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:44.348 15:42:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:30:44.348 15:42:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:44.348 15:42:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:44.348 rmmod nvme_tcp 00:30:44.348 rmmod nvme_fabrics 00:30:44.348 rmmod nvme_keyring 00:30:44.348 15:42:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:44.348 15:42:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:30:44.348 15:42:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:30:44.348 15:42:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1225570 ']' 00:30:44.348 15:42:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1225570 00:30:44.348 15:42:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1225570 ']' 00:30:44.348 15:42:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1225570 00:30:44.348 15:42:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:30:44.348 15:42:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:44.348 15:42:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1225570 00:30:44.348 15:42:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:44.348 15:42:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:44.348 15:42:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 1225570' 00:30:44.348 killing process with pid 1225570 00:30:44.348 15:42:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1225570 00:30:44.348 15:42:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1225570 00:30:44.607 15:42:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:44.607 15:42:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:44.607 15:42:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:44.607 15:42:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:44.607 15:42:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:44.607 15:42:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.607 15:42:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:44.607 15:42:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:47.141 00:30:47.141 real 0m14.095s 00:30:47.141 user 0m20.853s 00:30:47.141 sys 0m2.883s 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.141 ************************************ 00:30:47.141 END TEST nvmf_host_discovery 00:30:47.141 ************************************ 00:30:47.141 15:42:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:47.141 15:42:17 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:47.141 15:42:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:47.141 15:42:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:47.141 15:42:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:47.141 ************************************ 00:30:47.141 START TEST nvmf_host_multipath_status 00:30:47.141 ************************************ 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:47.141 * Looking for test storage... 
00:30:47.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:30:47.141 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:47.141 15:42:17 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:47.142 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:47.142 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:47.142 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:47.142 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:47.142 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:47.142 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:47.142 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.142 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:47.142 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.142 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:47.142 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:47.142 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:30:47.142 15:42:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:49.041 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:49.041 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:30:49.041 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:49.042 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:49.042 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:49.042 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:49.042 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:49.042 15:42:19 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:49.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:49.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:30:49.042 00:30:49.042 --- 10.0.0.2 ping statistics --- 00:30:49.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.042 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:49.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:49.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:30:49.042 00:30:49.042 --- 10.0.0.1 ping statistics --- 00:30:49.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.042 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:49.042 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:49.043 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:49.043 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:49.043 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:49.043 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:49.043 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:49.043 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:49.043 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:49.043 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:49.043 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1229274 00:30:49.043 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:49.043 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1229274 00:30:49.043 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1229274 ']' 00:30:49.043 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:49.043 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:49.043 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:49.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:49.043 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:49.043 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:49.043 [2024-07-13 15:42:19.568361] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:30:49.043 [2024-07-13 15:42:19.568441] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:49.043 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.043 [2024-07-13 15:42:19.605622] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:49.043 [2024-07-13 15:42:19.636103] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:49.043 [2024-07-13 15:42:19.725691] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:49.043 [2024-07-13 15:42:19.725754] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:49.043 [2024-07-13 15:42:19.725770] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:49.043 [2024-07-13 15:42:19.725783] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:49.043 [2024-07-13 15:42:19.725795] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:49.043 [2024-07-13 15:42:19.725883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.043 [2024-07-13 15:42:19.725889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.301 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:49.301 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:30:49.301 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:49.301 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:49.301 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:49.301 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.301 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1229274 00:30:49.301 15:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:49.558 [2024-07-13 15:42:20.093067] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.558 15:42:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:49.814 Malloc0 00:30:49.814 15:42:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:50.071 15:42:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:50.329 15:42:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:30:50.329 [2024-07-13 15:42:21.089912] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:50.587 15:42:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:50.864 [2024-07-13 15:42:21.378825] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:50.864 15:42:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1229552 00:30:50.864 15:42:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:50.864 15:42:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:50.864 15:42:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1229552 /var/tmp/bdevperf.sock 00:30:50.864 15:42:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1229552 ']' 00:30:50.864 15:42:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:50.864 15:42:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:50.864 15:42:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:50.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
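The RPC sequence traced above configures the target side of the multipath test: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with ANA reporting enabled (-r), the Malloc0 namespace, and two TCP listeners on 10.0.0.2 (ports 4420 and 4421) so the host sees two paths to the same namespace; bdevperf is started with -z so the NVMe controllers can be attached over /var/tmp/bdevperf.sock before the I/O run begins. A consolidated sketch of those target-side RPCs, assuming the rpc.py path used by this job:

# Target-side configuration, consolidated from the trace above; RPC points at
# the rpc.py shipped in the SPDK checkout used by this job.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421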
00:30:50.864 15:42:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:50.864 15:42:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:51.122 15:42:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:51.122 15:42:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:30:51.122 15:42:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:51.380 15:42:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:30:51.638 Nvme0n1 00:30:51.638 15:42:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:52.205 Nvme0n1 00:30:52.205 15:42:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:52.205 15:42:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:54.105 15:42:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:30:54.105 15:42:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:54.362 15:42:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:54.930 15:42:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:30:55.861 15:42:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:30:55.861 15:42:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:55.861 15:42:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.861 15:42:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:56.119 15:42:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.119 15:42:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:56.119 15:42:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.119 15:42:26 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:56.378 15:42:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:56.378 15:42:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:56.378 15:42:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.378 15:42:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:56.637 15:42:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.637 15:42:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:56.637 15:42:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.637 15:42:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:56.637 15:42:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.637 15:42:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:56.637 15:42:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.637 15:42:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:56.895 15:42:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.895 15:42:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:56.895 15:42:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.895 15:42:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:57.154 15:42:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:57.154 15:42:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:57.154 15:42:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:57.412 15:42:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:57.672 15:42:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:59.053 15:42:29 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:59.053 15:42:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:59.053 15:42:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.053 15:42:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:59.053 15:42:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:59.053 15:42:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:59.053 15:42:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.053 15:42:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:59.311 15:42:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.311 15:42:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:59.311 15:42:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.311 15:42:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:59.569 15:42:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.569 15:42:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:59.569 15:42:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.569 15:42:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:59.827 15:42:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.827 15:42:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:59.827 15:42:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.827 15:42:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:00.085 15:42:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:00.085 15:42:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:00.085 15:42:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:00.085 15:42:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:00.344 15:42:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:00.344 15:42:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:00.344 15:42:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:00.603 15:42:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:00.863 15:42:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:01.801 15:42:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:01.801 15:42:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:01.801 15:42:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.801 15:42:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:02.059 15:42:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.059 15:42:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:02.059 15:42:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.059 15:42:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:02.316 15:42:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:02.316 15:42:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:02.316 15:42:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.316 15:42:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:02.573 15:42:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.573 15:42:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:02.573 15:42:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.573 15:42:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:02.830 15:42:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.830 15:42:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:02.830 15:42:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.830 15:42:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:03.087 15:42:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:03.087 15:42:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:03.087 15:42:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:03.087 15:42:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:03.344 15:42:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:03.344 15:42:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:03.344 15:42:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:03.602 15:42:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:03.861 15:42:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:04.800 15:42:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:04.800 15:42:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:04.800 15:42:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.800 15:42:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:05.101 15:42:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:05.101 15:42:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:05.101 15:42:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.101 15:42:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:05.382 15:42:36 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:05.382 15:42:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:05.382 15:42:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.382 15:42:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:05.639 15:42:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:05.639 15:42:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:05.639 15:42:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.639 15:42:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:05.896 15:42:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:05.897 15:42:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:05.897 15:42:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.897 15:42:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:06.154 15:42:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.154 15:42:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:06.154 15:42:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.154 15:42:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:06.412 15:42:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:06.412 15:42:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:06.412 15:42:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:06.670 15:42:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:06.930 15:42:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:07.861 15:42:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:07.861 15:42:38 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:07.861 15:42:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:07.861 15:42:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:08.118 15:42:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:08.118 15:42:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:08.118 15:42:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:08.118 15:42:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:08.376 15:42:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:08.376 15:42:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:08.376 15:42:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:08.376 15:42:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:08.633 15:42:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:08.633 15:42:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:08.633 15:42:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:08.633 15:42:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:08.890 15:42:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:08.890 15:42:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:08.890 15:42:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:08.890 15:42:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:09.148 15:42:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:09.148 15:42:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:09.148 15:42:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.148 15:42:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:09.406 15:42:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:09.406 15:42:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:09.406 15:42:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:09.664 15:42:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:09.923 15:42:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:10.861 15:42:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:10.861 15:42:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:10.861 15:42:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:10.861 15:42:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:11.118 15:42:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:11.118 15:42:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:11.118 15:42:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.118 15:42:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:11.376 15:42:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:11.376 15:42:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:11.376 15:42:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.376 15:42:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:11.633 15:42:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:11.633 15:42:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:11.633 15:42:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.633 15:42:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:11.891 15:42:42 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:11.891 15:42:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:11.891 15:42:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.891 15:42:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:12.149 15:42:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:12.149 15:42:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:12.149 15:42:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.149 15:42:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:12.406 15:42:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.406 15:42:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:12.663 15:42:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:12.663 15:42:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:12.919 15:42:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:13.177 15:42:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:14.109 15:42:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:14.109 15:42:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:14.109 15:42:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.109 15:42:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:14.368 15:42:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:14.368 15:42:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:14.368 15:42:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.368 15:42:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:31:14.626 15:42:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:14.626 15:42:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:14.626 15:42:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.626 15:42:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:14.883 15:42:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:14.883 15:42:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:14.883 15:42:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.883 15:42:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:15.141 15:42:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.141 15:42:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:15.141 15:42:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.141 15:42:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:15.399 15:42:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.399 15:42:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:15.399 15:42:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.399 15:42:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:15.658 15:42:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.658 15:42:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:15.658 15:42:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:15.916 15:42:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:16.174 15:42:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:17.110 15:42:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:31:17.110 15:42:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:17.110 15:42:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.110 15:42:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:17.368 15:42:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:17.368 15:42:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:17.368 15:42:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.368 15:42:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:17.626 15:42:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:17.626 15:42:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:17.626 15:42:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.626 15:42:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:17.884 15:42:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:17.884 15:42:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:17.884 15:42:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.884 15:42:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:18.142 15:42:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.142 15:42:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:18.142 15:42:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.142 15:42:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:18.457 15:42:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.457 15:42:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:18.457 15:42:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.457 15:42:49 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:18.715 15:42:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.715 15:42:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:18.715 15:42:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:18.974 15:42:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:19.234 15:42:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:20.174 15:42:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:20.174 15:42:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:20.174 15:42:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.174 15:42:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:20.433 15:42:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:20.433 15:42:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:20.433 15:42:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.433 15:42:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:20.691 15:42:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:20.691 15:42:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:20.691 15:42:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.691 15:42:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:20.949 15:42:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:20.949 15:42:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:20.949 15:42:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.950 15:42:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:21.208 15:42:51 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.208 15:42:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:21.208 15:42:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.208 15:42:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:21.468 15:42:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.468 15:42:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:21.468 15:42:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.468 15:42:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:21.726 15:42:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.726 15:42:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:21.726 15:42:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:21.984 15:42:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:22.241 15:42:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:23.172 15:42:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:23.172 15:42:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:23.172 15:42:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.172 15:42:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:23.430 15:42:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:23.430 15:42:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:23.430 15:42:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.430 15:42:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:23.689 15:42:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:23.689 15:42:54 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:23.689 15:42:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.689 15:42:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:23.946 15:42:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:23.946 15:42:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:23.946 15:42:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.946 15:42:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:24.203 15:42:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.203 15:42:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:24.203 15:42:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.203 15:42:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:24.461 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.461 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:24.462 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.462 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:24.719 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:24.719 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1229552 00:31:24.719 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1229552 ']' 00:31:24.720 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1229552 00:31:24.720 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:31:24.720 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:24.720 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1229552 00:31:24.720 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:31:24.720 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:31:24.720 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
1229552' 00:31:24.720 killing process with pid 1229552 00:31:24.720 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1229552 00:31:24.720 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1229552 00:31:24.990 Connection closed with partial response: 00:31:24.990 00:31:24.990 00:31:24.990 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1229552 00:31:24.990 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:24.990 [2024-07-13 15:42:21.436699] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:31:24.990 [2024-07-13 15:42:21.436776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1229552 ] 00:31:24.990 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.990 [2024-07-13 15:42:21.467896] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:24.990 [2024-07-13 15:42:21.496364] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.990 [2024-07-13 15:42:21.583874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:24.990 Running I/O for 90 seconds... 00:31:24.990 [2024-07-13 15:42:37.241484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.990 [2024-07-13 15:42:37.241550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:24.990 [2024-07-13 15:42:37.241621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.990 [2024-07-13 15:42:37.241643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:24.990 [2024-07-13 15:42:37.241668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.990 [2024-07-13 15:42:37.241699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:24.990 [2024-07-13 15:42:37.241722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.990 [2024-07-13 15:42:37.241738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:24.990 [2024-07-13 15:42:37.241760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.990 [2024-07-13 15:42:37.241776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:24.990 [2024-07-13 15:42:37.241797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.990 
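The ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions replayed here from try.txt are the expected effect of the ANA transitions driven earlier: each check_status cycle flips the two listeners' ANA state with nvmf_subsystem_listener_set_ana_state, sleeps for a second, then reads the per-path current/connected/accessible flags back from bdevperf with bdev_nvme_get_io_paths filtered through jq (the multipath policy is also switched to active_active partway through). A sketch of one such probe, using the 4421-inaccessible / 4420-current pair as an example:

# One status probe as used by port_status() above; the 4420/"current" query is
# one example, the test repeats it for current/connected/accessible on both ports.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
sleep 1
# Query bdevperf's view of the paths and pick out one flag for one listener.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
  | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'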
[2024-07-13 15:42:37.241813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:24.990 [2024-07-13 15:42:37.241835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.990 [2024-07-13 15:42:37.241873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:24.990 [2024-07-13 15:42:37.241900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.990 [2024-07-13 15:42:37.241917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:24.990 [2024-07-13 15:42:37.241940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.990 [2024-07-13 15:42:37.241957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:24.990 [2024-07-13 15:42:37.243112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.990 [2024-07-13 15:42:37.243139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:24.990 [2024-07-13 15:42:37.243182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.990 [2024-07-13 15:42:37.243202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:24.990 [2024-07-13 15:42:37.243228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.990 [2024-07-13 15:42:37.243245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:24.990 [2024-07-13 15:42:37.243270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.990 [2024-07-13 15:42:37.243287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:24.990 [2024-07-13 15:42:37.243312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.990 [2024-07-13 15:42:37.243329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:24.990 [2024-07-13 15:42:37.243354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.990 [2024-07-13 15:42:37.243370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:24.990 [2024-07-13 15:42:37.243395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72400 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.990 [2024-07-13 15:42:37.243412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:24.990 [2024-07-13 15:42:37.243436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.990 [2024-07-13 15:42:37.243468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:24.990 [2024-07-13 15:42:37.243493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.243509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.243533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.243549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.243572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.243588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.243612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.243629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.243653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.243669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.243692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.243713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.243737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.243754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.243778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.243794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.243818] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.243834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.243884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.243903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.243929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.243946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.243970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.243987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.244012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.244029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.244054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.244070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.244095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.244111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.244136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.244152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.244176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.244193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.244218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.244239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 
15:42:37.244264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.244281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.244305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.244337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.244362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.244393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.244419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.244436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.244461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.244477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.244503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.244520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.244632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.244654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.244685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.244703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.244731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.244748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.244775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.244793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 
cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.244820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.244837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.244872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.244890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.244924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.244941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.244969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.244985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.245012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.245029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:24.991 [2024-07-13 15:42:37.245058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.991 [2024-07-13 15:42:37.245075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.245102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.245119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.245146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.245163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.245191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.245208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.245235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.245252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.245279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.245296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.245323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.245354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.245382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.245398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.245425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.245441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.245473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.245489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.245516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.245532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.245560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.245576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.245603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.245619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.245647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.245663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.245689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.245706] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.245732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.245748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.245775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.245791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.245817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.245834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.245889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.245909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.245937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.245954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.245981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.245998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.246025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.246046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.246075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.992 [2024-07-13 15:42:37.246092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.246119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.992 [2024-07-13 15:42:37.246136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.246179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:24.992 [2024-07-13 15:42:37.246196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.246223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.246239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.246266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.246282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.246309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.246325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.246352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.246369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.246395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.246412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.246438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.246455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.246482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.246498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:24.992 [2024-07-13 15:42:37.246620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.992 [2024-07-13 15:42:37.246657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:37.246692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:37.246715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:37.246747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 
nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:37.246765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:37.246795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:37.246812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:37.246843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:37.246860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:37.246900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:37.246918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:37.246949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:37.246966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:37.246997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:37.247014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:37.247044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:37.247062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:37.247092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:37.247109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:37.247140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:37.247172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:37.247204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:37.247221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:37.247251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:37.247267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:37.247296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:37.247313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:37.247347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:37.247364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:37.247394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.993 [2024-07-13 15:42:37.247411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:37.247440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.993 [2024-07-13 15:42:37.247457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:37.247486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.993 [2024-07-13 15:42:37.247503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:37.247532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.993 [2024-07-13 15:42:37.247548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:37.247578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.993 [2024-07-13 15:42:37.247594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:37.247623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.993 [2024-07-13 15:42:37.247640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:37.247670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.993 [2024-07-13 15:42:37.247686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:24.993 [2024-07-13 15:42:52.838597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:52.838667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:52.838705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:52.838724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:52.838748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:52.838765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:52.838787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:52.838804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:52.838835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:52.838873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:52.838900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:52.838916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:52.838939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:52.838955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:52.838977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:52.838993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:52.839016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:52.839032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:52.839053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.993 [2024-07-13 15:42:52.839070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:24.993 [2024-07-13 15:42:52.839115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.839133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.839156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.839190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.839213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.839229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.839251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.839267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.839288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.839304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.839325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.839341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.839363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.839383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.839405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.839422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.839443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.839459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.839480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.839496] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.839532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.839548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.839569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.839585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.839606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.839621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.839642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.994 [2024-07-13 15:42:52.839674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.839696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:58736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.994 [2024-07-13 15:42:52.839712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.839752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.994 [2024-07-13 15:42:52.839768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.839791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:58800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.994 [2024-07-13 15:42:52.839808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.839831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.839848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.839878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.839905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.839929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:24.994 [2024-07-13 15:42:52.839946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.839968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.839985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.840007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.840024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.840047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.840063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.840085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.840102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.841149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.841189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.841216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.841234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.841256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.841272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.841294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.841310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.841331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.841347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.841369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 
lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.841385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.841407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.841427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.841450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.841466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:24.994 [2024-07-13 15:42:52.841488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.994 [2024-07-13 15:42:52.841526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:24.995 [2024-07-13 15:42:52.841551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.995 [2024-07-13 15:42:52.841567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:24.995 [2024-07-13 15:42:52.841589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:59584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.995 [2024-07-13 15:42:52.841606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:24.995 [2024-07-13 15:42:52.841629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.995 [2024-07-13 15:42:52.841645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:24.995 [2024-07-13 15:42:52.841667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.995 [2024-07-13 15:42:52.841684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:24.995 [2024-07-13 15:42:52.841706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.995 [2024-07-13 15:42:52.841722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:24.995 [2024-07-13 15:42:52.841744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.995 [2024-07-13 15:42:52.841761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:24.995 [2024-07-13 15:42:52.841783] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.995 [2024-07-13 15:42:52.841800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:24.995 [2024-07-13 15:42:52.841822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.995 [2024-07-13 15:42:52.841838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:24.995 [2024-07-13 15:42:52.841860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.995 [2024-07-13 15:42:52.841888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:24.995 [2024-07-13 15:42:52.841918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:58696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.995 [2024-07-13 15:42:52.841940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:24.995 [2024-07-13 15:42:52.841968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.995 [2024-07-13 15:42:52.841985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:24.995 [2024-07-13 15:42:52.842007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:58760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.995 [2024-07-13 15:42:52.842024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:24.995 [2024-07-13 15:42:52.842046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:58792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.995 [2024-07-13 15:42:52.842063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:24.995 [2024-07-13 15:42:52.842085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.995 [2024-07-13 15:42:52.842101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:24.995 [2024-07-13 15:42:52.842123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:58856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.995 [2024-07-13 15:42:52.842139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:24.995 [2024-07-13 15:42:52.842169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.995 [2024-07-13 15:42:52.842185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 
00:31:24.995 [2024-07-13 15:42:52.842208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.995 [2024-07-13 15:42:52.842225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:24.995 [2024-07-13 15:42:52.843235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:58832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.995 [2024-07-13 15:42:52.843258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:24.995 [2024-07-13 15:42:52.843285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:58864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.995 [2024-07-13 15:42:52.843303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:24.995 [2024-07-13 15:42:52.843325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.995 [2024-07-13 15:42:52.843341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:24.995 [2024-07-13 15:42:52.843362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.995 [2024-07-13 15:42:52.843378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:24.995 [2024-07-13 15:42:52.843399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.995 [2024-07-13 15:42:52.843415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:24.995 [2024-07-13 15:42:52.843441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.995 [2024-07-13 15:42:52.843458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:24.995 [2024-07-13 15:42:52.843479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.995 [2024-07-13 15:42:52.843495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.843541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.996 [2024-07-13 15:42:52.843558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.843581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:59088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.996 [2024-07-13 15:42:52.843597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.843619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.996 [2024-07-13 15:42:52.843636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.843657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.996 [2024-07-13 15:42:52.843674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.843696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.996 [2024-07-13 15:42:52.843712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.843734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.996 [2024-07-13 15:42:52.843750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.843776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.996 [2024-07-13 15:42:52.843793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.843815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:59280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.996 [2024-07-13 15:42:52.843831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.843854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:58968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.996 [2024-07-13 15:42:52.843878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.843903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.996 [2024-07-13 15:42:52.843920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.843942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.996 [2024-07-13 15:42:52.843962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.843985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.996 [2024-07-13 15:42:52.844002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.844023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.996 [2024-07-13 15:42:52.844040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.844062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.996 [2024-07-13 15:42:52.844078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.844100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.996 [2024-07-13 15:42:52.844116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.844138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:59192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.996 [2024-07-13 15:42:52.844154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.844175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:59224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.996 [2024-07-13 15:42:52.844192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.844214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:59256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.996 [2024-07-13 15:42:52.844230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.844252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.996 [2024-07-13 15:42:52.844269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.844291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.996 [2024-07-13 15:42:52.844307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.844329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:58768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.996 [2024-07-13 15:42:52.844345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.844368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:24.996 [2024-07-13 15:42:52.844385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.844407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.996 [2024-07-13 15:42:52.844428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.844450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.996 [2024-07-13 15:42:52.844467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.844490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.996 [2024-07-13 15:42:52.844506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.845248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.996 [2024-07-13 15:42:52.845272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.845299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.996 [2024-07-13 15:42:52.845318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.845341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.996 [2024-07-13 15:42:52.845358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.845380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.996 [2024-07-13 15:42:52.845412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.845434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.996 [2024-07-13 15:42:52.845451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.845473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.996 [2024-07-13 15:42:52.845489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:24.996 [2024-07-13 15:42:52.845510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 
lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.997 [2024-07-13 15:42:52.845526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.845548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.997 [2024-07-13 15:42:52.845565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.845587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.997 [2024-07-13 15:42:52.845602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.845624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.997 [2024-07-13 15:42:52.845640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.845690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.997 [2024-07-13 15:42:52.845708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.845730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.997 [2024-07-13 15:42:52.845747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.845769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.997 [2024-07-13 15:42:52.845786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.845808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.997 [2024-07-13 15:42:52.845825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.845846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.997 [2024-07-13 15:42:52.845863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.845901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.997 [2024-07-13 15:42:52.845919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.845941] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.997 [2024-07-13 15:42:52.845958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.845980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.997 [2024-07-13 15:42:52.845997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.846024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.997 [2024-07-13 15:42:52.846042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.846065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.997 [2024-07-13 15:42:52.846081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.846103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.997 [2024-07-13 15:42:52.846120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.846142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:58728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.997 [2024-07-13 15:42:52.846159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.846186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:58792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.997 [2024-07-13 15:42:52.846203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.846225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:58856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.997 [2024-07-13 15:42:52.846242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.846265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.997 [2024-07-13 15:42:52.846281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.847453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.997 [2024-07-13 15:42:52.847477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:31:24.997 [2024-07-13 15:42:52.847505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.997 [2024-07-13 15:42:52.847524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.847547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.997 [2024-07-13 15:42:52.847565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.847587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.997 [2024-07-13 15:42:52.847604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.847626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.997 [2024-07-13 15:42:52.847642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.847665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.997 [2024-07-13 15:42:52.847682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.847704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:58864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.997 [2024-07-13 15:42:52.847720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.847742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.997 [2024-07-13 15:42:52.847759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.847781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:58992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.997 [2024-07-13 15:42:52.847797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.847819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.997 [2024-07-13 15:42:52.847839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.847878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.997 [2024-07-13 15:42:52.847897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.847920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.997 [2024-07-13 15:42:52.847936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:24.997 [2024-07-13 15:42:52.847958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.998 [2024-07-13 15:42:52.847975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.847997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:58968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.998 [2024-07-13 15:42:52.848014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.848036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.998 [2024-07-13 15:42:52.848052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.848074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.998 [2024-07-13 15:42:52.848091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.848113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.998 [2024-07-13 15:42:52.848129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.848151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.998 [2024-07-13 15:42:52.848167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.848189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.998 [2024-07-13 15:42:52.848205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.848228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:58768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.998 [2024-07-13 15:42:52.848244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.848281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.998 [2024-07-13 15:42:52.848297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.848319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.998 [2024-07-13 15:42:52.848339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.848974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.998 [2024-07-13 15:42:52.848998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.849026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:59048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.998 [2024-07-13 15:42:52.849044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.849067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:59112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.998 [2024-07-13 15:42:52.849083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.849105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.998 [2024-07-13 15:42:52.849122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.849144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.998 [2024-07-13 15:42:52.849188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.849212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.998 [2024-07-13 15:42:52.849243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.849267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.998 [2024-07-13 15:42:52.849283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.849306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.998 [2024-07-13 15:42:52.849322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.849344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:24.998 [2024-07-13 15:42:52.849360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.849382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.998 [2024-07-13 15:42:52.849399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.849421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.998 [2024-07-13 15:42:52.849437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.849459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.998 [2024-07-13 15:42:52.849476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.849503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.998 [2024-07-13 15:42:52.849521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.849543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.998 [2024-07-13 15:42:52.849559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.849581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.998 [2024-07-13 15:42:52.849598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.849621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.998 [2024-07-13 15:42:52.849637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.849660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:58728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.998 [2024-07-13 15:42:52.849680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.849704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.998 [2024-07-13 15:42:52.849722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.850898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.998 [2024-07-13 15:42:52.850923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.850951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.998 [2024-07-13 15:42:52.850970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.850993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.998 [2024-07-13 15:42:52.851010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.851032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.998 [2024-07-13 15:42:52.851048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:24.998 [2024-07-13 15:42:52.851070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.998 [2024-07-13 15:42:52.851087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.851109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.999 [2024-07-13 15:42:52.851125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.851153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.999 [2024-07-13 15:42:52.851171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.851193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.999 [2024-07-13 15:42:52.851209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.851231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.999 [2024-07-13 15:42:52.851248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.851270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.999 [2024-07-13 15:42:52.851286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.851308] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.999 [2024-07-13 15:42:52.851325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.851362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.999 [2024-07-13 15:42:52.851378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.851400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.999 [2024-07-13 15:42:52.851416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.851437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.999 [2024-07-13 15:42:52.851452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.851474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.999 [2024-07-13 15:42:52.851489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.851511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.999 [2024-07-13 15:42:52.851527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.851549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:58968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.999 [2024-07-13 15:42:52.851565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.851586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.999 [2024-07-13 15:42:52.851602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.851623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:59224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.999 [2024-07-13 15:42:52.851643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.851665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:58768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.999 [2024-07-13 15:42:52.851681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 
dnr:0 00:31:24.999 [2024-07-13 15:42:52.851703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.999 [2024-07-13 15:42:52.851718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.851740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.999 [2024-07-13 15:42:52.851756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.851777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.999 [2024-07-13 15:42:52.851793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.851814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.999 [2024-07-13 15:42:52.851830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.851851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.999 [2024-07-13 15:42:52.851891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.851918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:59048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.999 [2024-07-13 15:42:52.851935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.851958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.999 [2024-07-13 15:42:52.851974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.851996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.999 [2024-07-13 15:42:52.852013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.852035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.999 [2024-07-13 15:42:52.852051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.852073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.999 [2024-07-13 15:42:52.852090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.852111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.999 [2024-07-13 15:42:52.852132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.852155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.999 [2024-07-13 15:42:52.852186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.852208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.999 [2024-07-13 15:42:52.852224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.852246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:58856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:24.999 [2024-07-13 15:42:52.852262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.855901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.999 [2024-07-13 15:42:52.855928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.855956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.999 [2024-07-13 15:42:52.855975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.855997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.999 [2024-07-13 15:42:52.856014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:24.999 [2024-07-13 15:42:52.856036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.000 [2024-07-13 15:42:52.856053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:25.000 [2024-07-13 15:42:52.856074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.000 [2024-07-13 15:42:52.856091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:25.000 [2024-07-13 15:42:52.856113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.000 [2024-07-13 15:42:52.856129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:25.000 [2024-07-13 15:42:52.856150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.000 [2024-07-13 15:42:52.856166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:25.000 [2024-07-13 15:42:52.856188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.000 [2024-07-13 15:42:52.856204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:25.000 [2024-07-13 15:42:52.856227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.000 [2024-07-13 15:42:52.856243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:25.000 [2024-07-13 15:42:52.856270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:59744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.000 [2024-07-13 15:42:52.856287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:25.000 [2024-07-13 15:42:52.856309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.000 [2024-07-13 15:42:52.856326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:25.000 [2024-07-13 15:42:52.856347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.000 [2024-07-13 15:42:52.856363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:25.000 [2024-07-13 15:42:52.856385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.000 [2024-07-13 15:42:52.856401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:25.000 [2024-07-13 15:42:52.856423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.000 [2024-07-13 15:42:52.856439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:25.000 [2024-07-13 15:42:52.856461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.000 [2024-07-13 15:42:52.856477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:25.000 [2024-07-13 15:42:52.856499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:25.000 [2024-07-13 15:42:52.856515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:25.000 [2024-07-13 15:42:52.856537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.000 [2024-07-13 15:42:52.856553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:25.000 [2024-07-13 15:42:52.856575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.000 [2024-07-13 15:42:52.856591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:25.000 [2024-07-13 15:42:52.856613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.000 [2024-07-13 15:42:52.856629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:25.000 [2024-07-13 15:42:52.856652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.000 [2024-07-13 15:42:52.856668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:25.000 [2024-07-13 15:42:52.856690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.000 [2024-07-13 15:42:52.856706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:25.000 [2024-07-13 15:42:52.856734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:58768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.000 [2024-07-13 15:42:52.856766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:25.000 [2024-07-13 15:42:52.856789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.000 [2024-07-13 15:42:52.856805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:25.000 [2024-07-13 15:42:52.856826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.000 [2024-07-13 15:42:52.856842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:25.000 [2024-07-13 15:42:52.856890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.000 [2024-07-13 15:42:52.856908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:25.000 [2024-07-13 15:42:52.856930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 
00:31:25.000 - 00:31:25.007 [2024-07-13 15:42:52.856946 - 15:42:52.875386] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated READ and WRITE commands on sqid:1 nsid:1 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 for reads, SGL DATA BLOCK OFFSET 0x0 len:0x1000 for writes; lba range ~58768-60976, cid 1-126), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0000-007f p:0 m:0 dnr:0
(03/02) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:25.007 [2024-07-13 15:42:52.875408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.007 [2024-07-13 15:42:52.875424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:25.007 [2024-07-13 15:42:52.875446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.007 [2024-07-13 15:42:52.875462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:25.007 [2024-07-13 15:42:52.875484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.007 [2024-07-13 15:42:52.875500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:25.007 [2024-07-13 15:42:52.875522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.007 [2024-07-13 15:42:52.875539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:25.007 [2024-07-13 15:42:52.875561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.007 [2024-07-13 15:42:52.875577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:25.007 [2024-07-13 15:42:52.875599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.007 [2024-07-13 15:42:52.875630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:25.007 [2024-07-13 15:42:52.875653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.007 [2024-07-13 15:42:52.875668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:25.007 [2024-07-13 15:42:52.875690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.007 [2024-07-13 15:42:52.875705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:25.007 [2024-07-13 15:42:52.875742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.007 [2024-07-13 15:42:52.875758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:25.007 [2024-07-13 15:42:52.875778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.007 [2024-07-13 15:42:52.875797] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:25.007 [2024-07-13 15:42:52.875818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.007 [2024-07-13 15:42:52.875834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:25.007 [2024-07-13 15:42:52.875878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.007 [2024-07-13 15:42:52.875897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.007 [2024-07-13 15:42:52.875919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:60568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.007 [2024-07-13 15:42:52.875936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:25.007 [2024-07-13 15:42:52.875958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.007 [2024-07-13 15:42:52.875974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:25.007 [2024-07-13 15:42:52.875996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.007 [2024-07-13 15:42:52.876012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:25.007 [2024-07-13 15:42:52.876034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.007 [2024-07-13 15:42:52.876050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:25.007 [2024-07-13 15:42:52.876072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.007 [2024-07-13 15:42:52.876088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:25.007 [2024-07-13 15:42:52.876110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.007 [2024-07-13 15:42:52.876126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:25.007 [2024-07-13 15:42:52.876167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:60192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.007 [2024-07-13 15:42:52.876183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:25.007 [2024-07-13 15:42:52.876205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:25.007 [2024-07-13 15:42:52.876235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:25.007 [2024-07-13 15:42:52.876256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.008 [2024-07-13 15:42:52.876271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.876292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.008 [2024-07-13 15:42:52.876311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.876332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.008 [2024-07-13 15:42:52.876348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.876369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.008 [2024-07-13 15:42:52.876385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.878626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.008 [2024-07-13 15:42:52.878652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.878695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.008 [2024-07-13 15:42:52.878714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.878736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.008 [2024-07-13 15:42:52.878753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.878774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.008 [2024-07-13 15:42:52.878806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.878829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.008 [2024-07-13 15:42:52.878845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.878875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 
lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.008 [2024-07-13 15:42:52.878893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.878916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.008 [2024-07-13 15:42:52.878933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.878955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.008 [2024-07-13 15:42:52.878972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.878999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.008 [2024-07-13 15:42:52.879016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.879039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.008 [2024-07-13 15:42:52.879056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.879083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.008 [2024-07-13 15:42:52.879100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.879122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.008 [2024-07-13 15:42:52.879138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.879160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.008 [2024-07-13 15:42:52.879193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.879219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.008 [2024-07-13 15:42:52.879251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.879274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.008 [2024-07-13 15:42:52.879290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.879311] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.008 [2024-07-13 15:42:52.879327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.879349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.008 [2024-07-13 15:42:52.879364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.879386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.008 [2024-07-13 15:42:52.879402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.879423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.008 [2024-07-13 15:42:52.879439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.879460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.008 [2024-07-13 15:42:52.879476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.879512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.008 [2024-07-13 15:42:52.879528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.879549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:60568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.008 [2024-07-13 15:42:52.879564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.879589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.008 [2024-07-13 15:42:52.879621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.879643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.008 [2024-07-13 15:42:52.879659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.879680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:60192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.008 [2024-07-13 15:42:52.879695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:31:25.008 [2024-07-13 15:42:52.879734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.008 [2024-07-13 15:42:52.879750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.879772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.008 [2024-07-13 15:42:52.879788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.879810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.008 [2024-07-13 15:42:52.879826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.879848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:60320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.008 [2024-07-13 15:42:52.879871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:25.008 [2024-07-13 15:42:52.879896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.009 [2024-07-13 15:42:52.879912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.881366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:60752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.009 [2024-07-13 15:42:52.881391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.881434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.009 [2024-07-13 15:42:52.881453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.881476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.009 [2024-07-13 15:42:52.881493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.881515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.009 [2024-07-13 15:42:52.881532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.881553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.009 [2024-07-13 15:42:52.881575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.881599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.009 [2024-07-13 15:42:52.881616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.881638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.009 [2024-07-13 15:42:52.881654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.881676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.009 [2024-07-13 15:42:52.881693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.881715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.009 [2024-07-13 15:42:52.881731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.881753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.009 [2024-07-13 15:42:52.881769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.881792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.009 [2024-07-13 15:42:52.881808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.881830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.009 [2024-07-13 15:42:52.881847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.881878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.009 [2024-07-13 15:42:52.881896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.881919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.009 [2024-07-13 15:42:52.881936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.881958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.009 [2024-07-13 15:42:52.881975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.881997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:60952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.009 [2024-07-13 15:42:52.882013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.882035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.009 [2024-07-13 15:42:52.882056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.882079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.009 [2024-07-13 15:42:52.882096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.882118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:60464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.009 [2024-07-13 15:42:52.882134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.882156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:60384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.009 [2024-07-13 15:42:52.882188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.882210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:60744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.009 [2024-07-13 15:42:52.882226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.882247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.009 [2024-07-13 15:42:52.882263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.882285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.009 [2024-07-13 15:42:52.882300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.882321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.009 [2024-07-13 15:42:52.882337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.882358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:25.009 [2024-07-13 15:42:52.882374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.882396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.009 [2024-07-13 15:42:52.882412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.882433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.009 [2024-07-13 15:42:52.882449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.882471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.009 [2024-07-13 15:42:52.882487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.882508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.009 [2024-07-13 15:42:52.882528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.882550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.009 [2024-07-13 15:42:52.882566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.882588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:60088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.009 [2024-07-13 15:42:52.882604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:25.009 [2024-07-13 15:42:52.882625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.010 [2024-07-13 15:42:52.882641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.882663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.010 [2024-07-13 15:42:52.882679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.882701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.010 [2024-07-13 15:42:52.882716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.882738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:43 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.010 [2024-07-13 15:42:52.882754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.882775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.010 [2024-07-13 15:42:52.882791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.882813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.010 [2024-07-13 15:42:52.882829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.885801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.010 [2024-07-13 15:42:52.885845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.885883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.010 [2024-07-13 15:42:52.885903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.885927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.010 [2024-07-13 15:42:52.885944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.885966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.010 [2024-07-13 15:42:52.885982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.886010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:60608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.010 [2024-07-13 15:42:52.886027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.886049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.010 [2024-07-13 15:42:52.886065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.886087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.010 [2024-07-13 15:42:52.886104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.886126] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.010 [2024-07-13 15:42:52.886142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.886180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.010 [2024-07-13 15:42:52.886195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.886231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.010 [2024-07-13 15:42:52.886247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.886268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.010 [2024-07-13 15:42:52.886284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.886304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.010 [2024-07-13 15:42:52.886319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.886340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.010 [2024-07-13 15:42:52.886355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.886393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.010 [2024-07-13 15:42:52.886408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.886429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.010 [2024-07-13 15:42:52.886444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.886465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.010 [2024-07-13 15:42:52.886496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.886523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.010 [2024-07-13 15:42:52.886540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 
00:31:25.010 [2024-07-13 15:42:52.886562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.010 [2024-07-13 15:42:52.886579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.886601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.010 [2024-07-13 15:42:52.886617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.886639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.010 [2024-07-13 15:42:52.886655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.886677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.010 [2024-07-13 15:42:52.886693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.886715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.010 [2024-07-13 15:42:52.886731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.886753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.010 [2024-07-13 15:42:52.886769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.886791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:60952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.010 [2024-07-13 15:42:52.886807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.886829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.010 [2024-07-13 15:42:52.886846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.886875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:60384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.010 [2024-07-13 15:42:52.886893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.886916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.010 [2024-07-13 15:42:52.886932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:23 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:25.010 [2024-07-13 15:42:52.886954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.010 [2024-07-13 15:42:52.886970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.886992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.011 [2024-07-13 15:42:52.887012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.887035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.011 [2024-07-13 15:42:52.887051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.887073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.011 [2024-07-13 15:42:52.887089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.887111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.011 [2024-07-13 15:42:52.887128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.887166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.011 [2024-07-13 15:42:52.887182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.887203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.011 [2024-07-13 15:42:52.887218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.887239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:60672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.011 [2024-07-13 15:42:52.887254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.887274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:60528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.011 [2024-07-13 15:42:52.887290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.887310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.011 [2024-07-13 15:42:52.887326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.887346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.011 [2024-07-13 15:42:52.887362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.887382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.011 [2024-07-13 15:42:52.887397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.887418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.011 [2024-07-13 15:42:52.887433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.887453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.011 [2024-07-13 15:42:52.887472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.887493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.011 [2024-07-13 15:42:52.887509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.887530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.011 [2024-07-13 15:42:52.887545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.887565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.011 [2024-07-13 15:42:52.887580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.887600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.011 [2024-07-13 15:42:52.887616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.887636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.011 [2024-07-13 15:42:52.887652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.887672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:25.011 [2024-07-13 15:42:52.887688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.887708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.011 [2024-07-13 15:42:52.887724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.887745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.011 [2024-07-13 15:42:52.887760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.887781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.011 [2024-07-13 15:42:52.887797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.888648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.011 [2024-07-13 15:42:52.888672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.888699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.011 [2024-07-13 15:42:52.888717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.888740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.011 [2024-07-13 15:42:52.888757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:25.011 [2024-07-13 15:42:52.888785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.011 [2024-07-13 15:42:52.888802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.889418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:61016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.012 [2024-07-13 15:42:52.889442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.889470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.012 [2024-07-13 15:42:52.889488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.889510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 
lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.012 [2024-07-13 15:42:52.889527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.889549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.012 [2024-07-13 15:42:52.889566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.889587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.012 [2024-07-13 15:42:52.889603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.889625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.012 [2024-07-13 15:42:52.889641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.889663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.012 [2024-07-13 15:42:52.889683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.889712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.012 [2024-07-13 15:42:52.889729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.889751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.012 [2024-07-13 15:42:52.889767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.889789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.012 [2024-07-13 15:42:52.889806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.889828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.012 [2024-07-13 15:42:52.889844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.889887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.012 [2024-07-13 15:42:52.889906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.889929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:60928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.012 [2024-07-13 15:42:52.889946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.891541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.012 [2024-07-13 15:42:52.891580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.891606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.012 [2024-07-13 15:42:52.891639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.891662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.012 [2024-07-13 15:42:52.891679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.891701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.012 [2024-07-13 15:42:52.891717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.891739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.012 [2024-07-13 15:42:52.891755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.891777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.012 [2024-07-13 15:42:52.891793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.891815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.012 [2024-07-13 15:42:52.891831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.891854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.012 [2024-07-13 15:42:52.891879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.891903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.012 [2024-07-13 15:42:52.891919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 
00:31:25.012 [2024-07-13 15:42:52.891941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.012 [2024-07-13 15:42:52.891958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.891980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:60384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.012 [2024-07-13 15:42:52.892001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.892024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.012 [2024-07-13 15:42:52.892040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.892062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.012 [2024-07-13 15:42:52.892079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.892101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.012 [2024-07-13 15:42:52.892117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.892139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.012 [2024-07-13 15:42:52.892155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.892177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:60528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.012 [2024-07-13 15:42:52.892193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.892215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.012 [2024-07-13 15:42:52.892231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:25.012 [2024-07-13 15:42:52.892269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.012 [2024-07-13 15:42:52.892286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.892307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.013 [2024-07-13 15:42:52.892323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.892359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.013 [2024-07-13 15:42:52.892375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.892396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.013 [2024-07-13 15:42:52.892411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.892432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.013 [2024-07-13 15:42:52.892447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.892468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.013 [2024-07-13 15:42:52.892487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.892509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.013 [2024-07-13 15:42:52.892524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.892545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.013 [2024-07-13 15:42:52.892560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.892581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.013 [2024-07-13 15:42:52.892596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.892616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.013 [2024-07-13 15:42:52.892631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.892652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.013 [2024-07-13 15:42:52.892667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.892688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.013 [2024-07-13 15:42:52.892702] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.892723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.013 [2024-07-13 15:42:52.892738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.892759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.013 [2024-07-13 15:42:52.892774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.892795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.013 [2024-07-13 15:42:52.892810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.892831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.013 [2024-07-13 15:42:52.892861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.892894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.013 [2024-07-13 15:42:52.892912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.892935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.013 [2024-07-13 15:42:52.892951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.892977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.013 [2024-07-13 15:42:52.892995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.893017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.013 [2024-07-13 15:42:52.893033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.893055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.013 [2024-07-13 15:42:52.893071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.893094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:25.013 [2024-07-13 15:42:52.893111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.895221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.013 [2024-07-13 15:42:52.895247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.895292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.013 [2024-07-13 15:42:52.895314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.895338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.013 [2024-07-13 15:42:52.895354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.895376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.013 [2024-07-13 15:42:52.895392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.895414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.013 [2024-07-13 15:42:52.895431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.895453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.013 [2024-07-13 15:42:52.895470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.895491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.013 [2024-07-13 15:42:52.895507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.895529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.013 [2024-07-13 15:42:52.895546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.895574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.013 [2024-07-13 15:42:52.895591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.895613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 
lba:61528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.013 [2024-07-13 15:42:52.895629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.895651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.013 [2024-07-13 15:42:52.895668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.895689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.013 [2024-07-13 15:42:52.895706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:25.013 [2024-07-13 15:42:52.895728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.014 [2024-07-13 15:42:52.895744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:25.014 [2024-07-13 15:42:52.895766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.014 [2024-07-13 15:42:52.895797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:25.014 [2024-07-13 15:42:52.895818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.014 [2024-07-13 15:42:52.895833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:25.014 [2024-07-13 15:42:52.895877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.014 [2024-07-13 15:42:52.895895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:25.014 [2024-07-13 15:42:52.895917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.014 [2024-07-13 15:42:52.895934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:25.014 [2024-07-13 15:42:52.895955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.014 [2024-07-13 15:42:52.895971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:25.014 [2024-07-13 15:42:52.896009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:61624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.014 [2024-07-13 15:42:52.896026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:25.014 [2024-07-13 15:42:52.896048] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:61656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.014 [2024-07-13 15:42:52.896064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:25.014 [2024-07-13 15:42:52.896087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:25.014 [2024-07-13 15:42:52.896107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:25.014 Received shutdown signal, test time was about 32.491226 seconds 00:31:25.014 00:31:25.014 Latency(us) 00:31:25.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:25.014 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:25.014 Verification LBA range: start 0x0 length 0x4000 00:31:25.014 Nvme0n1 : 32.49 7956.14 31.08 0.00 0.00 16041.71 374.71 4026531.84 00:31:25.014 =================================================================================================================== 00:31:25.014 Total : 7956.14 31.08 0.00 0.00 16041.71 374.71 4026531.84 00:31:25.014 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:25.273 rmmod nvme_tcp 00:31:25.273 rmmod nvme_fabrics 00:31:25.273 rmmod nvme_keyring 00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1229274 ']' 00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1229274 00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1229274 ']' 00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1229274 00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1229274 00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1229274' 00:31:25.273 killing process with pid 1229274 00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1229274 00:31:25.273 15:42:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1229274 00:31:25.531 15:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:25.531 15:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:25.531 15:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:25.531 15:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:25.531 15:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:25.531 15:42:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.531 15:42:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:25.531 15:42:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.068 15:42:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:28.068 00:31:28.068 real 0m40.930s 00:31:28.068 user 2m3.566s 00:31:28.068 sys 0m10.404s 00:31:28.068 15:42:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:28.068 15:42:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:28.068 ************************************ 00:31:28.068 END TEST nvmf_host_multipath_status 00:31:28.068 ************************************ 00:31:28.068 15:42:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:28.068 15:42:58 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:28.068 15:42:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:28.068 15:42:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:28.068 15:42:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:28.068 ************************************ 00:31:28.068 START TEST nvmf_discovery_remove_ifc 00:31:28.068 ************************************ 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:28.068 * Looking for test storage... 
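For reference, the nvmftestfini teardown traced just above reduces to roughly the following sequence. This is a minimal sketch reconstructed from the traced commands only; the subsystem NQN, the interface name cvl_0_1, and PID 1229274 are specific to this run:

    # drop the subsystem the multipath test created on the target
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # unload the kernel initiator stack (the rmmod output above shows nvme_tcp,
    # nvme_fabrics and nvme_keyring being removed)
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # stop the nvmf_tgt reactor, then flush the initiator-side address
    kill 1229274
    wait 1229274
    ip -4 addr flush cvl_0_1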
00:31:28.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:28.068 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:31:28.069 15:42:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:29.970 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:29.970 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:31:29.970 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:29.970 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:29.970 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:29.970 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:29.970 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:29.970 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:31:29.970 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:29.970 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:29.971 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:29.971 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:29.971 15:43:00 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:29.971 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:29.971 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:29.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:29.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:31:29.971 00:31:29.971 --- 10.0.0.2 ping statistics --- 00:31:29.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:29.971 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:29.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:29.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:31:29.971 00:31:29.971 --- 10.0.0.1 ping statistics --- 00:31:29.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:29.971 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1235622 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1235622 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1235622 ']' 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:29.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:29.971 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:29.971 [2024-07-13 15:43:00.527629] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:31:29.971 [2024-07-13 15:43:00.527717] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:29.971 EAL: No free 2048 kB hugepages reported on node 1 00:31:29.972 [2024-07-13 15:43:00.566041] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:31:29.972 [2024-07-13 15:43:00.598672] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.972 [2024-07-13 15:43:00.689716] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:29.972 [2024-07-13 15:43:00.689782] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:29.972 [2024-07-13 15:43:00.689806] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:29.972 [2024-07-13 15:43:00.689821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:29.972 [2024-07-13 15:43:00.689832] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:29.972 [2024-07-13 15:43:00.689879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:30.229 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:30.229 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:31:30.229 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:30.229 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:30.229 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:30.229 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:30.229 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:30.229 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.229 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:30.229 [2024-07-13 15:43:00.846584] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:30.230 [2024-07-13 15:43:00.854755] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:30.230 null0 00:31:30.230 [2024-07-13 15:43:00.886708] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.230 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.230 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1235756 00:31:30.230 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:30.230 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1235756 /tmp/host.sock 00:31:30.230 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1235756 ']' 00:31:30.230 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:31:30.230 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:30.230 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
00:31:30.230 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:30.230 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:30.230 15:43:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:30.230 [2024-07-13 15:43:00.951764] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:31:30.230 [2024-07-13 15:43:00.951842] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1235756 ] 00:31:30.230 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.230 [2024-07-13 15:43:00.984655] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:30.487 [2024-07-13 15:43:01.015011] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.488 [2024-07-13 15:43:01.106351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.488 15:43:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:30.488 15:43:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:31:30.488 15:43:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:30.488 15:43:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:30.488 15:43:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.488 15:43:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:30.488 15:43:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.488 15:43:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:30.488 15:43:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.488 15:43:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:30.745 15:43:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.745 15:43:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:30.745 15:43:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.745 15:43:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:31.681 [2024-07-13 15:43:02.304653] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:31.681 [2024-07-13 15:43:02.304684] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:31.681 [2024-07-13 15:43:02.304711] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:31.681 [2024-07-13 15:43:02.433160] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:31.948 [2024-07-13 15:43:02.495710] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:31.948 [2024-07-13 15:43:02.495785] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:31.948 [2024-07-13 15:43:02.495829] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:31.948 [2024-07-13 15:43:02.495856] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:31.948 [2024-07-13 15:43:02.495893] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:31.948 [2024-07-13 15:43:02.502825] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x644370 was disconnected and freed. delete nvme_qpair. 
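[Editor's sketch] The trace above is the core pattern discovery_remove_ifc.sh drives: start discovery against the target's listener on port 8009, then poll the bdev list until the discovered namespace shows up. A minimal condensation of that flow, assuming rpc_cmd is shorthand for scripts/rpc.py pointed at the host application's /tmp/host.sock (flag values taken from the trace):

# start discovery on the host app, as in the trace above
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach

# wait_for_bdev: poll until the expected bdev (nvme0n1) is reported
while [[ "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" != nvme0n1 ]]; do
    sleep 1
done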
00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:31.948 15:43:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:32.891 15:43:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:32.891 15:43:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:32.891 15:43:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:32.891 15:43:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.891 15:43:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:32.891 15:43:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:32.891 15:43:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:33.150 15:43:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.150 15:43:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:33.150 15:43:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:34.111 15:43:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:34.111 15:43:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.111 15:43:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:34.111 15:43:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.111 15:43:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:34.111 15:43:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:31:34.111 15:43:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:34.111 15:43:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.111 15:43:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:34.111 15:43:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:35.046 15:43:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:35.046 15:43:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:35.046 15:43:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:35.046 15:43:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.046 15:43:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:35.046 15:43:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:35.046 15:43:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:35.046 15:43:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.046 15:43:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:35.046 15:43:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:36.422 15:43:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:36.422 15:43:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:36.422 15:43:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:36.422 15:43:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.422 15:43:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:36.422 15:43:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:36.422 15:43:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:36.422 15:43:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.422 15:43:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:36.422 15:43:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:37.357 15:43:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:37.357 15:43:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:37.357 15:43:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.357 15:43:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:37.357 15:43:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:37.357 15:43:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:37.357 15:43:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:37.357 15:43:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:31:37.357 15:43:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:37.357 15:43:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:37.357 [2024-07-13 15:43:07.936985] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:37.357 [2024-07-13 15:43:07.937060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.357 [2024-07-13 15:43:07.937081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.357 [2024-07-13 15:43:07.937099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.357 [2024-07-13 15:43:07.937112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.357 [2024-07-13 15:43:07.937125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.357 [2024-07-13 15:43:07.937137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.357 [2024-07-13 15:43:07.937150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.357 [2024-07-13 15:43:07.937173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.357 [2024-07-13 15:43:07.937186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:37.357 [2024-07-13 15:43:07.937198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.357 [2024-07-13 15:43:07.937225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60ad50 is same with the state(5) to be set 00:31:37.357 [2024-07-13 15:43:07.947010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60ad50 (9): Bad file descriptor 00:31:37.357 [2024-07-13 15:43:07.957053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:38.292 15:43:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:38.292 15:43:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:38.292 15:43:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.292 15:43:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:38.292 15:43:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:38.292 15:43:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:38.292 15:43:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:38.292 [2024-07-13 15:43:09.021904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:38.292 [2024-07-13 
15:43:09.021951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x60ad50 with addr=10.0.0.2, port=4420 00:31:38.292 [2024-07-13 15:43:09.021973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60ad50 is same with the state(5) to be set 00:31:38.292 [2024-07-13 15:43:09.022004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60ad50 (9): Bad file descriptor 00:31:38.292 [2024-07-13 15:43:09.022400] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:38.292 [2024-07-13 15:43:09.022436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:38.292 [2024-07-13 15:43:09.022453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:38.292 [2024-07-13 15:43:09.022473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:38.292 [2024-07-13 15:43:09.022496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:38.292 [2024-07-13 15:43:09.022516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:38.292 15:43:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.292 15:43:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:38.292 15:43:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:39.668 [2024-07-13 15:43:10.025013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:39.668 [2024-07-13 15:43:10.025052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:39.668 [2024-07-13 15:43:10.025087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:39.668 [2024-07-13 15:43:10.025106] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:31:39.668 [2024-07-13 15:43:10.025137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:39.668 [2024-07-13 15:43:10.025187] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:39.668 [2024-07-13 15:43:10.025250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.668 [2024-07-13 15:43:10.025283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.668 [2024-07-13 15:43:10.025310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.668 [2024-07-13 15:43:10.025334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.668 [2024-07-13 15:43:10.025358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.668 [2024-07-13 15:43:10.025391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.668 [2024-07-13 15:43:10.025416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.668 [2024-07-13 15:43:10.025439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.668 [2024-07-13 15:43:10.025463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:39.668 [2024-07-13 15:43:10.025488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:39.668 [2024-07-13 15:43:10.025508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
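[Editor's note] At this point the target-side address has been deleted and the link taken down, so the host's reconnect attempts time out (errno 110), the controller is failed, and its discovery entry is removed. The test only watches the bdev list, but a hypothetical way to observe the same outage from the host socket (not part of discovery_remove_ifc.sh) would be:

# hypothetical observation commands, not from the test script
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers            # controller state while reconnects fail
scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'    # becomes empty once nvme0n1 is deleted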
00:31:39.668 [2024-07-13 15:43:10.025633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60a210 (9): Bad file descriptor 00:31:39.668 [2024-07-13 15:43:10.026664] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:39.668 [2024-07-13 15:43:10.026699] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:39.668 15:43:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:39.668 15:43:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:39.668 15:43:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:39.668 15:43:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.668 15:43:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:39.668 15:43:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:39.668 15:43:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:39.668 15:43:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.668 15:43:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:39.668 15:43:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:39.668 15:43:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:39.668 15:43:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:39.668 15:43:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:39.668 15:43:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:39.668 15:43:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.668 15:43:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:39.668 15:43:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:39.668 15:43:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:39.668 15:43:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:39.668 15:43:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.668 15:43:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:39.668 15:43:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:40.599 15:43:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:40.599 15:43:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:40.599 15:43:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:40.599 15:43:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.599 15:43:11 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:31:40.599 15:43:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:40.599 15:43:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:40.599 15:43:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.599 15:43:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:40.599 15:43:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:41.537 [2024-07-13 15:43:12.040315] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:41.537 [2024-07-13 15:43:12.040349] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:41.537 [2024-07-13 15:43:12.040375] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:41.537 [2024-07-13 15:43:12.166797] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:41.537 15:43:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:41.537 15:43:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:41.537 15:43:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.537 15:43:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:41.537 15:43:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:41.537 15:43:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:41.537 15:43:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:41.537 15:43:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.537 15:43:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:41.537 15:43:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:41.795 [2024-07-13 15:43:12.393389] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:41.795 [2024-07-13 15:43:12.393453] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:41.795 [2024-07-13 15:43:12.393491] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:41.795 [2024-07-13 15:43:12.393518] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:41.795 [2024-07-13 15:43:12.393534] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:41.795 [2024-07-13 15:43:12.399391] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x618550 was disconnected and freed. delete nvme_qpair. 
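[Editor's sketch] Restoring the interface inside the target's namespace reverses the failure: the address and link come back, the discovery service re-attaches the subsystem, and the host sees a fresh bdev (nvme1n1), confirmed by the get_bdev_list check just below. Condensed from the commands in the trace above (again assuming rpc_cmd maps to scripts/rpk.py is wrong — assuming scripts/rpc.py against /tmp/host.sock):

# restore the target-side interface inside the netns, as in the trace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up

# wait for the re-discovered namespace to reappear as a new bdev
while [[ "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" != nvme1n1 ]]; do
    sleep 1
done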
00:31:42.731 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:42.731 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:42.731 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:42.731 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.731 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:42.731 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:42.731 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:42.731 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.731 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:42.731 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:42.731 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1235756 00:31:42.731 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1235756 ']' 00:31:42.731 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1235756 00:31:42.731 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:31:42.731 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:42.731 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1235756 00:31:42.731 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:42.731 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:42.731 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1235756' 00:31:42.731 killing process with pid 1235756 00:31:42.731 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1235756 00:31:42.731 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1235756 00:31:42.989 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:42.989 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:42.989 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:31:42.989 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:42.989 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:31:42.989 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:42.989 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:42.989 rmmod nvme_tcp 00:31:42.989 rmmod nvme_fabrics 00:31:42.990 rmmod nvme_keyring 00:31:42.990 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:42.990 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:31:42.990 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
00:31:42.990 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1235622 ']' 00:31:42.990 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1235622 00:31:42.990 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1235622 ']' 00:31:42.990 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1235622 00:31:42.990 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:31:42.990 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:42.990 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1235622 00:31:42.990 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:42.990 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:42.990 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1235622' 00:31:42.990 killing process with pid 1235622 00:31:42.990 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1235622 00:31:42.990 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1235622 00:31:43.247 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:43.247 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:43.247 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:43.247 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:43.247 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:43.247 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:43.247 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:43.247 15:43:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.181 15:43:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:45.181 00:31:45.181 real 0m17.587s 00:31:45.181 user 0m25.566s 00:31:45.181 sys 0m2.961s 00:31:45.181 15:43:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:45.181 15:43:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:45.181 ************************************ 00:31:45.181 END TEST nvmf_discovery_remove_ifc 00:31:45.181 ************************************ 00:31:45.181 15:43:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:45.181 15:43:15 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:45.181 15:43:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:45.181 15:43:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:45.181 15:43:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:45.440 ************************************ 00:31:45.440 START TEST nvmf_identify_kernel_target 00:31:45.441 ************************************ 
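[Editor's sketch] The next test, nvmf_identify_kernel_target, points spdk_nvme_identify at a Linux kernel nvmet target instead of an SPDK one. The configure_kernel_target steps appear piecemeal in the trace below (mkdir of the subsystem, namespace, and port, a series of echoes, and an ln -s); a condensed sketch of that configfs sequence follows. The attribute file names are filled in from the standard nvmet configfs layout, since the trace only shows the values being echoed, so treat this as a sketch rather than a copy of the script:

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1

# subsystem, namespace backed by the local NVMe disk, and a TCP port
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
echo 1            > "$subsys/attr_allow_any_host"       # assumed attribute name
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"  # backing device, per the trace
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"

# expose the subsystem on the port, then verify from the initiator side
ln -s "$subsys" "$port/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420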
00:31:45.441 15:43:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:45.441 * Looking for test storage... 00:31:45.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:45.441 15:43:16 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:31:45.441 15:43:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:47.343 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:47.343 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:47.343 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:47.344 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:47.344 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:47.344 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:47.603 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:47.603 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:47.603 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:47.603 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:47.603 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:47.603 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:47.603 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:47.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:47.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:31:47.604 00:31:47.604 --- 10.0.0.2 ping statistics --- 00:31:47.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:47.604 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:47.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:47.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:31:47.604 00:31:47.604 --- 10.0.0.1 ping statistics --- 00:31:47.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:47.604 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:47.604 15:43:18 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:47.604 15:43:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:48.538 Waiting for block devices as requested 00:31:48.538 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:48.821 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:48.821 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:49.082 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:49.082 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:49.082 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:49.082 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:49.341 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:49.341 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:49.341 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:49.341 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:49.601 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:49.601 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:49.601 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:49.601 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:49.861 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:49.861 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:50.123 No valid GPT data, bailing 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:31:50.123 00:31:50.123 Discovery Log Number of Records 2, Generation counter 2 00:31:50.123 =====Discovery Log Entry 0====== 00:31:50.123 trtype: tcp 00:31:50.123 adrfam: ipv4 00:31:50.123 subtype: current discovery subsystem 00:31:50.123 treq: not specified, sq flow control disable supported 00:31:50.123 portid: 1 00:31:50.123 trsvcid: 4420 00:31:50.123 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:50.123 traddr: 10.0.0.1 00:31:50.123 eflags: none 00:31:50.123 sectype: none 00:31:50.123 =====Discovery Log Entry 1====== 00:31:50.123 trtype: tcp 00:31:50.123 adrfam: ipv4 00:31:50.123 subtype: nvme subsystem 00:31:50.123 treq: not specified, sq flow control disable supported 00:31:50.123 portid: 1 00:31:50.123 trsvcid: 4420 00:31:50.123 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:50.123 traddr: 10.0.0.1 00:31:50.123 eflags: none 00:31:50.123 sectype: none 00:31:50.123 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:50.123 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:50.123 EAL: No free 2048 kB hugepages reported on node 1 00:31:50.123 ===================================================== 00:31:50.123 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:50.123 ===================================================== 00:31:50.123 Controller Capabilities/Features 00:31:50.123 ================================ 00:31:50.123 Vendor ID: 0000 00:31:50.123 Subsystem Vendor ID: 0000 00:31:50.123 Serial Number: d722277f7e2342985e00 00:31:50.123 Model Number: Linux 00:31:50.123 Firmware Version: 6.7.0-68 00:31:50.123 Recommended Arb Burst: 0 00:31:50.123 IEEE OUI Identifier: 00 00 00 00:31:50.123 Multi-path I/O 00:31:50.123 May have multiple subsystem ports: No 00:31:50.123 May have multiple 
controllers: No 00:31:50.123 Associated with SR-IOV VF: No 00:31:50.123 Max Data Transfer Size: Unlimited 00:31:50.123 Max Number of Namespaces: 0 00:31:50.123 Max Number of I/O Queues: 1024 00:31:50.123 NVMe Specification Version (VS): 1.3 00:31:50.123 NVMe Specification Version (Identify): 1.3 00:31:50.123 Maximum Queue Entries: 1024 00:31:50.123 Contiguous Queues Required: No 00:31:50.123 Arbitration Mechanisms Supported 00:31:50.123 Weighted Round Robin: Not Supported 00:31:50.123 Vendor Specific: Not Supported 00:31:50.123 Reset Timeout: 7500 ms 00:31:50.123 Doorbell Stride: 4 bytes 00:31:50.123 NVM Subsystem Reset: Not Supported 00:31:50.123 Command Sets Supported 00:31:50.123 NVM Command Set: Supported 00:31:50.123 Boot Partition: Not Supported 00:31:50.123 Memory Page Size Minimum: 4096 bytes 00:31:50.123 Memory Page Size Maximum: 4096 bytes 00:31:50.123 Persistent Memory Region: Not Supported 00:31:50.123 Optional Asynchronous Events Supported 00:31:50.123 Namespace Attribute Notices: Not Supported 00:31:50.123 Firmware Activation Notices: Not Supported 00:31:50.123 ANA Change Notices: Not Supported 00:31:50.123 PLE Aggregate Log Change Notices: Not Supported 00:31:50.123 LBA Status Info Alert Notices: Not Supported 00:31:50.123 EGE Aggregate Log Change Notices: Not Supported 00:31:50.123 Normal NVM Subsystem Shutdown event: Not Supported 00:31:50.123 Zone Descriptor Change Notices: Not Supported 00:31:50.123 Discovery Log Change Notices: Supported 00:31:50.123 Controller Attributes 00:31:50.123 128-bit Host Identifier: Not Supported 00:31:50.123 Non-Operational Permissive Mode: Not Supported 00:31:50.123 NVM Sets: Not Supported 00:31:50.123 Read Recovery Levels: Not Supported 00:31:50.123 Endurance Groups: Not Supported 00:31:50.123 Predictable Latency Mode: Not Supported 00:31:50.123 Traffic Based Keep ALive: Not Supported 00:31:50.123 Namespace Granularity: Not Supported 00:31:50.123 SQ Associations: Not Supported 00:31:50.123 UUID List: Not Supported 00:31:50.123 Multi-Domain Subsystem: Not Supported 00:31:50.123 Fixed Capacity Management: Not Supported 00:31:50.123 Variable Capacity Management: Not Supported 00:31:50.123 Delete Endurance Group: Not Supported 00:31:50.123 Delete NVM Set: Not Supported 00:31:50.123 Extended LBA Formats Supported: Not Supported 00:31:50.123 Flexible Data Placement Supported: Not Supported 00:31:50.123 00:31:50.123 Controller Memory Buffer Support 00:31:50.123 ================================ 00:31:50.123 Supported: No 00:31:50.123 00:31:50.123 Persistent Memory Region Support 00:31:50.123 ================================ 00:31:50.123 Supported: No 00:31:50.123 00:31:50.123 Admin Command Set Attributes 00:31:50.123 ============================ 00:31:50.123 Security Send/Receive: Not Supported 00:31:50.123 Format NVM: Not Supported 00:31:50.123 Firmware Activate/Download: Not Supported 00:31:50.123 Namespace Management: Not Supported 00:31:50.123 Device Self-Test: Not Supported 00:31:50.123 Directives: Not Supported 00:31:50.123 NVMe-MI: Not Supported 00:31:50.123 Virtualization Management: Not Supported 00:31:50.123 Doorbell Buffer Config: Not Supported 00:31:50.123 Get LBA Status Capability: Not Supported 00:31:50.123 Command & Feature Lockdown Capability: Not Supported 00:31:50.123 Abort Command Limit: 1 00:31:50.123 Async Event Request Limit: 1 00:31:50.123 Number of Firmware Slots: N/A 00:31:50.123 Firmware Slot 1 Read-Only: N/A 00:31:50.384 Firmware Activation Without Reset: N/A 00:31:50.384 Multiple Update Detection Support: N/A 
00:31:50.384 Firmware Update Granularity: No Information Provided 00:31:50.384 Per-Namespace SMART Log: No 00:31:50.384 Asymmetric Namespace Access Log Page: Not Supported 00:31:50.384 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:50.384 Command Effects Log Page: Not Supported 00:31:50.384 Get Log Page Extended Data: Supported 00:31:50.384 Telemetry Log Pages: Not Supported 00:31:50.384 Persistent Event Log Pages: Not Supported 00:31:50.384 Supported Log Pages Log Page: May Support 00:31:50.384 Commands Supported & Effects Log Page: Not Supported 00:31:50.384 Feature Identifiers & Effects Log Page:May Support 00:31:50.384 NVMe-MI Commands & Effects Log Page: May Support 00:31:50.384 Data Area 4 for Telemetry Log: Not Supported 00:31:50.384 Error Log Page Entries Supported: 1 00:31:50.384 Keep Alive: Not Supported 00:31:50.384 00:31:50.384 NVM Command Set Attributes 00:31:50.384 ========================== 00:31:50.384 Submission Queue Entry Size 00:31:50.384 Max: 1 00:31:50.384 Min: 1 00:31:50.384 Completion Queue Entry Size 00:31:50.384 Max: 1 00:31:50.384 Min: 1 00:31:50.384 Number of Namespaces: 0 00:31:50.384 Compare Command: Not Supported 00:31:50.384 Write Uncorrectable Command: Not Supported 00:31:50.384 Dataset Management Command: Not Supported 00:31:50.384 Write Zeroes Command: Not Supported 00:31:50.384 Set Features Save Field: Not Supported 00:31:50.384 Reservations: Not Supported 00:31:50.384 Timestamp: Not Supported 00:31:50.384 Copy: Not Supported 00:31:50.384 Volatile Write Cache: Not Present 00:31:50.384 Atomic Write Unit (Normal): 1 00:31:50.384 Atomic Write Unit (PFail): 1 00:31:50.384 Atomic Compare & Write Unit: 1 00:31:50.384 Fused Compare & Write: Not Supported 00:31:50.384 Scatter-Gather List 00:31:50.384 SGL Command Set: Supported 00:31:50.384 SGL Keyed: Not Supported 00:31:50.384 SGL Bit Bucket Descriptor: Not Supported 00:31:50.384 SGL Metadata Pointer: Not Supported 00:31:50.384 Oversized SGL: Not Supported 00:31:50.384 SGL Metadata Address: Not Supported 00:31:50.384 SGL Offset: Supported 00:31:50.384 Transport SGL Data Block: Not Supported 00:31:50.384 Replay Protected Memory Block: Not Supported 00:31:50.384 00:31:50.384 Firmware Slot Information 00:31:50.384 ========================= 00:31:50.384 Active slot: 0 00:31:50.384 00:31:50.384 00:31:50.384 Error Log 00:31:50.384 ========= 00:31:50.384 00:31:50.384 Active Namespaces 00:31:50.384 ================= 00:31:50.384 Discovery Log Page 00:31:50.384 ================== 00:31:50.384 Generation Counter: 2 00:31:50.384 Number of Records: 2 00:31:50.384 Record Format: 0 00:31:50.384 00:31:50.384 Discovery Log Entry 0 00:31:50.384 ---------------------- 00:31:50.384 Transport Type: 3 (TCP) 00:31:50.384 Address Family: 1 (IPv4) 00:31:50.384 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:50.384 Entry Flags: 00:31:50.384 Duplicate Returned Information: 0 00:31:50.384 Explicit Persistent Connection Support for Discovery: 0 00:31:50.384 Transport Requirements: 00:31:50.385 Secure Channel: Not Specified 00:31:50.385 Port ID: 1 (0x0001) 00:31:50.385 Controller ID: 65535 (0xffff) 00:31:50.385 Admin Max SQ Size: 32 00:31:50.385 Transport Service Identifier: 4420 00:31:50.385 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:50.385 Transport Address: 10.0.0.1 00:31:50.385 Discovery Log Entry 1 00:31:50.385 ---------------------- 00:31:50.385 Transport Type: 3 (TCP) 00:31:50.385 Address Family: 1 (IPv4) 00:31:50.385 Subsystem Type: 2 (NVM Subsystem) 00:31:50.385 Entry Flags: 
00:31:50.385 Duplicate Returned Information: 0 00:31:50.385 Explicit Persistent Connection Support for Discovery: 0 00:31:50.385 Transport Requirements: 00:31:50.385 Secure Channel: Not Specified 00:31:50.385 Port ID: 1 (0x0001) 00:31:50.385 Controller ID: 65535 (0xffff) 00:31:50.385 Admin Max SQ Size: 32 00:31:50.385 Transport Service Identifier: 4420 00:31:50.385 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:50.385 Transport Address: 10.0.0.1 00:31:50.385 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:50.385 EAL: No free 2048 kB hugepages reported on node 1 00:31:50.385 get_feature(0x01) failed 00:31:50.385 get_feature(0x02) failed 00:31:50.385 get_feature(0x04) failed 00:31:50.385 ===================================================== 00:31:50.385 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:50.385 ===================================================== 00:31:50.385 Controller Capabilities/Features 00:31:50.385 ================================ 00:31:50.385 Vendor ID: 0000 00:31:50.385 Subsystem Vendor ID: 0000 00:31:50.385 Serial Number: 53e95d80e7aa7f6ae05b 00:31:50.385 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:50.385 Firmware Version: 6.7.0-68 00:31:50.385 Recommended Arb Burst: 6 00:31:50.385 IEEE OUI Identifier: 00 00 00 00:31:50.385 Multi-path I/O 00:31:50.385 May have multiple subsystem ports: Yes 00:31:50.385 May have multiple controllers: Yes 00:31:50.385 Associated with SR-IOV VF: No 00:31:50.385 Max Data Transfer Size: Unlimited 00:31:50.385 Max Number of Namespaces: 1024 00:31:50.385 Max Number of I/O Queues: 128 00:31:50.385 NVMe Specification Version (VS): 1.3 00:31:50.385 NVMe Specification Version (Identify): 1.3 00:31:50.385 Maximum Queue Entries: 1024 00:31:50.385 Contiguous Queues Required: No 00:31:50.385 Arbitration Mechanisms Supported 00:31:50.385 Weighted Round Robin: Not Supported 00:31:50.385 Vendor Specific: Not Supported 00:31:50.385 Reset Timeout: 7500 ms 00:31:50.385 Doorbell Stride: 4 bytes 00:31:50.385 NVM Subsystem Reset: Not Supported 00:31:50.385 Command Sets Supported 00:31:50.385 NVM Command Set: Supported 00:31:50.385 Boot Partition: Not Supported 00:31:50.385 Memory Page Size Minimum: 4096 bytes 00:31:50.385 Memory Page Size Maximum: 4096 bytes 00:31:50.385 Persistent Memory Region: Not Supported 00:31:50.385 Optional Asynchronous Events Supported 00:31:50.385 Namespace Attribute Notices: Supported 00:31:50.385 Firmware Activation Notices: Not Supported 00:31:50.385 ANA Change Notices: Supported 00:31:50.385 PLE Aggregate Log Change Notices: Not Supported 00:31:50.385 LBA Status Info Alert Notices: Not Supported 00:31:50.385 EGE Aggregate Log Change Notices: Not Supported 00:31:50.385 Normal NVM Subsystem Shutdown event: Not Supported 00:31:50.385 Zone Descriptor Change Notices: Not Supported 00:31:50.385 Discovery Log Change Notices: Not Supported 00:31:50.385 Controller Attributes 00:31:50.385 128-bit Host Identifier: Supported 00:31:50.385 Non-Operational Permissive Mode: Not Supported 00:31:50.385 NVM Sets: Not Supported 00:31:50.385 Read Recovery Levels: Not Supported 00:31:50.385 Endurance Groups: Not Supported 00:31:50.385 Predictable Latency Mode: Not Supported 00:31:50.385 Traffic Based Keep ALive: Supported 00:31:50.385 Namespace Granularity: Not Supported 
00:31:50.385 SQ Associations: Not Supported 00:31:50.385 UUID List: Not Supported 00:31:50.385 Multi-Domain Subsystem: Not Supported 00:31:50.385 Fixed Capacity Management: Not Supported 00:31:50.385 Variable Capacity Management: Not Supported 00:31:50.385 Delete Endurance Group: Not Supported 00:31:50.385 Delete NVM Set: Not Supported 00:31:50.385 Extended LBA Formats Supported: Not Supported 00:31:50.385 Flexible Data Placement Supported: Not Supported 00:31:50.385 00:31:50.385 Controller Memory Buffer Support 00:31:50.385 ================================ 00:31:50.385 Supported: No 00:31:50.385 00:31:50.385 Persistent Memory Region Support 00:31:50.385 ================================ 00:31:50.385 Supported: No 00:31:50.385 00:31:50.385 Admin Command Set Attributes 00:31:50.385 ============================ 00:31:50.385 Security Send/Receive: Not Supported 00:31:50.385 Format NVM: Not Supported 00:31:50.385 Firmware Activate/Download: Not Supported 00:31:50.385 Namespace Management: Not Supported 00:31:50.385 Device Self-Test: Not Supported 00:31:50.385 Directives: Not Supported 00:31:50.385 NVMe-MI: Not Supported 00:31:50.385 Virtualization Management: Not Supported 00:31:50.385 Doorbell Buffer Config: Not Supported 00:31:50.385 Get LBA Status Capability: Not Supported 00:31:50.385 Command & Feature Lockdown Capability: Not Supported 00:31:50.385 Abort Command Limit: 4 00:31:50.385 Async Event Request Limit: 4 00:31:50.385 Number of Firmware Slots: N/A 00:31:50.385 Firmware Slot 1 Read-Only: N/A 00:31:50.385 Firmware Activation Without Reset: N/A 00:31:50.385 Multiple Update Detection Support: N/A 00:31:50.385 Firmware Update Granularity: No Information Provided 00:31:50.385 Per-Namespace SMART Log: Yes 00:31:50.385 Asymmetric Namespace Access Log Page: Supported 00:31:50.385 ANA Transition Time : 10 sec 00:31:50.385 00:31:50.385 Asymmetric Namespace Access Capabilities 00:31:50.385 ANA Optimized State : Supported 00:31:50.385 ANA Non-Optimized State : Supported 00:31:50.385 ANA Inaccessible State : Supported 00:31:50.385 ANA Persistent Loss State : Supported 00:31:50.385 ANA Change State : Supported 00:31:50.385 ANAGRPID is not changed : No 00:31:50.385 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:50.385 00:31:50.385 ANA Group Identifier Maximum : 128 00:31:50.385 Number of ANA Group Identifiers : 128 00:31:50.385 Max Number of Allowed Namespaces : 1024 00:31:50.385 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:50.385 Command Effects Log Page: Supported 00:31:50.385 Get Log Page Extended Data: Supported 00:31:50.385 Telemetry Log Pages: Not Supported 00:31:50.385 Persistent Event Log Pages: Not Supported 00:31:50.385 Supported Log Pages Log Page: May Support 00:31:50.385 Commands Supported & Effects Log Page: Not Supported 00:31:50.385 Feature Identifiers & Effects Log Page:May Support 00:31:50.385 NVMe-MI Commands & Effects Log Page: May Support 00:31:50.385 Data Area 4 for Telemetry Log: Not Supported 00:31:50.385 Error Log Page Entries Supported: 128 00:31:50.385 Keep Alive: Supported 00:31:50.385 Keep Alive Granularity: 1000 ms 00:31:50.385 00:31:50.385 NVM Command Set Attributes 00:31:50.385 ========================== 00:31:50.385 Submission Queue Entry Size 00:31:50.385 Max: 64 00:31:50.385 Min: 64 00:31:50.385 Completion Queue Entry Size 00:31:50.385 Max: 16 00:31:50.385 Min: 16 00:31:50.385 Number of Namespaces: 1024 00:31:50.385 Compare Command: Not Supported 00:31:50.385 Write Uncorrectable Command: Not Supported 00:31:50.385 Dataset Management Command: Supported 
00:31:50.385 Write Zeroes Command: Supported 00:31:50.385 Set Features Save Field: Not Supported 00:31:50.385 Reservations: Not Supported 00:31:50.385 Timestamp: Not Supported 00:31:50.385 Copy: Not Supported 00:31:50.385 Volatile Write Cache: Present 00:31:50.385 Atomic Write Unit (Normal): 1 00:31:50.385 Atomic Write Unit (PFail): 1 00:31:50.385 Atomic Compare & Write Unit: 1 00:31:50.385 Fused Compare & Write: Not Supported 00:31:50.385 Scatter-Gather List 00:31:50.385 SGL Command Set: Supported 00:31:50.385 SGL Keyed: Not Supported 00:31:50.385 SGL Bit Bucket Descriptor: Not Supported 00:31:50.385 SGL Metadata Pointer: Not Supported 00:31:50.385 Oversized SGL: Not Supported 00:31:50.385 SGL Metadata Address: Not Supported 00:31:50.385 SGL Offset: Supported 00:31:50.385 Transport SGL Data Block: Not Supported 00:31:50.385 Replay Protected Memory Block: Not Supported 00:31:50.385 00:31:50.385 Firmware Slot Information 00:31:50.385 ========================= 00:31:50.385 Active slot: 0 00:31:50.385 00:31:50.385 Asymmetric Namespace Access 00:31:50.385 =========================== 00:31:50.385 Change Count : 0 00:31:50.385 Number of ANA Group Descriptors : 1 00:31:50.385 ANA Group Descriptor : 0 00:31:50.385 ANA Group ID : 1 00:31:50.385 Number of NSID Values : 1 00:31:50.385 Change Count : 0 00:31:50.385 ANA State : 1 00:31:50.385 Namespace Identifier : 1 00:31:50.385 00:31:50.385 Commands Supported and Effects 00:31:50.385 ============================== 00:31:50.385 Admin Commands 00:31:50.385 -------------- 00:31:50.385 Get Log Page (02h): Supported 00:31:50.385 Identify (06h): Supported 00:31:50.385 Abort (08h): Supported 00:31:50.385 Set Features (09h): Supported 00:31:50.385 Get Features (0Ah): Supported 00:31:50.385 Asynchronous Event Request (0Ch): Supported 00:31:50.386 Keep Alive (18h): Supported 00:31:50.386 I/O Commands 00:31:50.386 ------------ 00:31:50.386 Flush (00h): Supported 00:31:50.386 Write (01h): Supported LBA-Change 00:31:50.386 Read (02h): Supported 00:31:50.386 Write Zeroes (08h): Supported LBA-Change 00:31:50.386 Dataset Management (09h): Supported 00:31:50.386 00:31:50.386 Error Log 00:31:50.386 ========= 00:31:50.386 Entry: 0 00:31:50.386 Error Count: 0x3 00:31:50.386 Submission Queue Id: 0x0 00:31:50.386 Command Id: 0x5 00:31:50.386 Phase Bit: 0 00:31:50.386 Status Code: 0x2 00:31:50.386 Status Code Type: 0x0 00:31:50.386 Do Not Retry: 1 00:31:50.386 Error Location: 0x28 00:31:50.386 LBA: 0x0 00:31:50.386 Namespace: 0x0 00:31:50.386 Vendor Log Page: 0x0 00:31:50.386 ----------- 00:31:50.386 Entry: 1 00:31:50.386 Error Count: 0x2 00:31:50.386 Submission Queue Id: 0x0 00:31:50.386 Command Id: 0x5 00:31:50.386 Phase Bit: 0 00:31:50.386 Status Code: 0x2 00:31:50.386 Status Code Type: 0x0 00:31:50.386 Do Not Retry: 1 00:31:50.386 Error Location: 0x28 00:31:50.386 LBA: 0x0 00:31:50.386 Namespace: 0x0 00:31:50.386 Vendor Log Page: 0x0 00:31:50.386 ----------- 00:31:50.386 Entry: 2 00:31:50.386 Error Count: 0x1 00:31:50.386 Submission Queue Id: 0x0 00:31:50.386 Command Id: 0x4 00:31:50.386 Phase Bit: 0 00:31:50.386 Status Code: 0x2 00:31:50.386 Status Code Type: 0x0 00:31:50.386 Do Not Retry: 1 00:31:50.386 Error Location: 0x28 00:31:50.386 LBA: 0x0 00:31:50.386 Namespace: 0x0 00:31:50.386 Vendor Log Page: 0x0 00:31:50.386 00:31:50.386 Number of Queues 00:31:50.386 ================ 00:31:50.386 Number of I/O Submission Queues: 128 00:31:50.386 Number of I/O Completion Queues: 128 00:31:50.386 00:31:50.386 ZNS Specific Controller Data 00:31:50.386 
============================ 00:31:50.386 Zone Append Size Limit: 0 00:31:50.386 00:31:50.386 00:31:50.386 Active Namespaces 00:31:50.386 ================= 00:31:50.386 get_feature(0x05) failed 00:31:50.386 Namespace ID:1 00:31:50.386 Command Set Identifier: NVM (00h) 00:31:50.386 Deallocate: Supported 00:31:50.386 Deallocated/Unwritten Error: Not Supported 00:31:50.386 Deallocated Read Value: Unknown 00:31:50.386 Deallocate in Write Zeroes: Not Supported 00:31:50.386 Deallocated Guard Field: 0xFFFF 00:31:50.386 Flush: Supported 00:31:50.386 Reservation: Not Supported 00:31:50.386 Namespace Sharing Capabilities: Multiple Controllers 00:31:50.386 Size (in LBAs): 1953525168 (931GiB) 00:31:50.386 Capacity (in LBAs): 1953525168 (931GiB) 00:31:50.386 Utilization (in LBAs): 1953525168 (931GiB) 00:31:50.386 UUID: 74b1fee6-fccf-40f7-aafe-9cc2fed9bc3a 00:31:50.386 Thin Provisioning: Not Supported 00:31:50.386 Per-NS Atomic Units: Yes 00:31:50.386 Atomic Boundary Size (Normal): 0 00:31:50.386 Atomic Boundary Size (PFail): 0 00:31:50.386 Atomic Boundary Offset: 0 00:31:50.386 NGUID/EUI64 Never Reused: No 00:31:50.386 ANA group ID: 1 00:31:50.386 Namespace Write Protected: No 00:31:50.386 Number of LBA Formats: 1 00:31:50.386 Current LBA Format: LBA Format #00 00:31:50.386 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:50.386 00:31:50.386 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:50.386 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:50.386 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:31:50.386 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:50.386 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:31:50.386 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:50.386 15:43:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:50.386 rmmod nvme_tcp 00:31:50.386 rmmod nvme_fabrics 00:31:50.386 15:43:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:50.386 15:43:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:31:50.386 15:43:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:31:50.386 15:43:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:50.386 15:43:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:50.386 15:43:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:50.386 15:43:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:50.386 15:43:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:50.386 15:43:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:50.386 15:43:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.386 15:43:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:50.386 15:43:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.300 15:43:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:52.300 
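clean_kernel_target, which runs next, unwinds the same configfs tree in reverse before handing the NVMe device back to setup.sh. A minimal sketch of the equivalent manual teardown, assuming the layout created above:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=$nvmet/ports/1

echo 0 > "$subsys/namespaces/1/enable"                     # quiesce the namespace first
rm -f  "$port/subsystems/nqn.2016-06.io.spdk:testnqn"      # unpublish the subsystem from the port
rmdir  "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet                                # same module removal the trace shows

The host side has already been cleaned up at this point: nvme-tcp and nvme-fabrics were unloaded and the test interface address flushed by nvmftestfini above.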
15:43:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:52.300 15:43:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:52.300 15:43:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:31:52.557 15:43:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:52.557 15:43:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:52.557 15:43:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:52.557 15:43:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:52.557 15:43:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:52.557 15:43:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:52.557 15:43:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:53.492 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:53.492 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:53.492 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:53.492 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:53.492 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:53.492 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:53.492 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:53.492 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:53.751 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:53.751 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:53.751 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:53.751 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:53.751 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:53.751 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:53.751 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:53.751 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:54.689 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:31:54.689 00:31:54.689 real 0m9.394s 00:31:54.689 user 0m1.992s 00:31:54.689 sys 0m3.345s 00:31:54.689 15:43:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:54.689 15:43:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:54.689 ************************************ 00:31:54.689 END TEST nvmf_identify_kernel_target 00:31:54.689 ************************************ 00:31:54.689 15:43:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:54.689 15:43:25 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:54.689 15:43:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:54.689 15:43:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:54.689 15:43:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:54.689 ************************************ 00:31:54.689 START TEST nvmf_auth_host 00:31:54.689 ************************************ 00:31:54.689 15:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:54.689 * Looking for test storage... 00:31:54.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:54.689 15:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:54.689 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:54.689 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:54.689 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:54.689 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:54.689 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:54.689 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:54.689 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:54.689 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:54.689 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:54.689 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:54.689 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:54.946 15:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:54.947 15:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:56.848 
15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:56.848 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:56.848 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:56.848 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:56.848 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:56.848 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:56.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:56.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:31:56.849 00:31:56.849 --- 10.0.0.2 ping statistics --- 00:31:56.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.849 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:56.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:56.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:31:56.849 00:31:56.849 --- 10.0.0.1 ping statistics --- 00:31:56.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.849 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1242822 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1242822 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1242822 ']' 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
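nvmfappstart launches the target inside the cvl_0_0_ns_spdk namespace so that everything on 10.0.0.1 reaches it over the dedicated port pair rather than loopback. A hedged sketch of the launch-and-wait pattern traced here; the binary path and flags are the ones from this log, while the polling loop stands in for the test's own waitforlisten helper:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# start nvmf_tgt in the target namespace with nvme_auth debug logging enabled
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!

# the RPC listener is a filesystem unix socket, so it is reachable from the default namespace;
# poll it until the app is ready to accept configuration RPCs
until "$SPDK/scripts/rpc.py" rpc_get_methods &> /dev/null; do
    sleep 0.5
done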
00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:56.849 15:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.415 15:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:57.415 15:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:31:57.415 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:57.415 15:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:57.415 15:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.415 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:57.415 15:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:57.415 15:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:57.415 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:57.415 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:57.415 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2d35f798487ea47739416bdd06634fa5 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.dMF 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2d35f798487ea47739416bdd06634fa5 0 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2d35f798487ea47739416bdd06634fa5 0 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2d35f798487ea47739416bdd06634fa5 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.dMF 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.dMF 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.dMF 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:57.416 
15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3b01e581acedfc3c86e2f7d82d0ed1ba2919d884e647b68b7f232b673574cf08 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.jij 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3b01e581acedfc3c86e2f7d82d0ed1ba2919d884e647b68b7f232b673574cf08 3 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3b01e581acedfc3c86e2f7d82d0ed1ba2919d884e647b68b7f232b673574cf08 3 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3b01e581acedfc3c86e2f7d82d0ed1ba2919d884e647b68b7f232b673574cf08 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.jij 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.jij 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.jij 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ed5421712ba8b7a6f2c71b7c01c197a4ef62537f94316104 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.5dj 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ed5421712ba8b7a6f2c71b7c01c197a4ef62537f94316104 0 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ed5421712ba8b7a6f2c71b7c01c197a4ef62537f94316104 0 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ed5421712ba8b7a6f2c71b7c01c197a4ef62537f94316104 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:57.416 15:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.5dj 00:31:57.416 15:43:28 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.5dj 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.5dj 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=74798160ad625d78253a7db5dfa2734b1fc803dc21dda4ae 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.YQ1 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 74798160ad625d78253a7db5dfa2734b1fc803dc21dda4ae 2 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 74798160ad625d78253a7db5dfa2734b1fc803dc21dda4ae 2 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=74798160ad625d78253a7db5dfa2734b1fc803dc21dda4ae 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.YQ1 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.YQ1 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.YQ1 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=10789df1b02b11602b3c9d6cbbaa5dbe 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.aq8 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 10789df1b02b11602b3c9d6cbbaa5dbe 1 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 10789df1b02b11602b3c9d6cbbaa5dbe 1 
00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=10789df1b02b11602b3c9d6cbbaa5dbe 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.aq8 00:31:57.416 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.aq8 00:31:57.417 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.aq8 00:31:57.417 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:57.417 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:57.417 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:57.417 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:57.417 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:57.417 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:57.417 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:57.417 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f6786fe745baa455ccfb1ee667f02763 00:31:57.417 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:57.417 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.kfK 00:31:57.417 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f6786fe745baa455ccfb1ee667f02763 1 00:31:57.417 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f6786fe745baa455ccfb1ee667f02763 1 00:31:57.417 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:57.417 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:57.417 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f6786fe745baa455ccfb1ee667f02763 00:31:57.417 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:57.417 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:57.674 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.kfK 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.kfK 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.kfK 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=308677ad4aee40f9c43ad2b47618ce02cecea1398ff42570 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.449 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 308677ad4aee40f9c43ad2b47618ce02cecea1398ff42570 2 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 308677ad4aee40f9c43ad2b47618ce02cecea1398ff42570 2 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=308677ad4aee40f9c43ad2b47618ce02cecea1398ff42570 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.449 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.449 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.449 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b119dce601256c85f9a01e844b3ee5cf 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.GXO 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b119dce601256c85f9a01e844b3ee5cf 0 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b119dce601256c85f9a01e844b3ee5cf 0 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b119dce601256c85f9a01e844b3ee5cf 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.GXO 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.GXO 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.GXO 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ffd70e1df54dd67d01c3a5fca6fb2dcc1189e108f667799d747c40b166a8ca4a 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.L2U 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ffd70e1df54dd67d01c3a5fca6fb2dcc1189e108f667799d747c40b166a8ca4a 3 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ffd70e1df54dd67d01c3a5fca6fb2dcc1189e108f667799d747c40b166a8ca4a 3 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ffd70e1df54dd67d01c3a5fca6fb2dcc1189e108f667799d747c40b166a8ca4a 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.L2U 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.L2U 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.L2U 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1242822 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1242822 ']' 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:57.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
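The gen_dhchap_key calls traced above boil down to a small recipe: read N random bytes as hex from /dev/urandom, then wrap that hex string in the DHHC-1 secret representation (a two-digit digest index, followed by base64 of the secret plus its little-endian CRC-32). A self-contained sketch for the "gen_dhchap_key null 48" case; the CRC handling and exact formatting are inferred from the trace and the usual DHHC-1 encoding, not copied verbatim from nvmf/common.sh:

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex characters, as in the gen_dhchap_key null 48 call above
python3 - "$key" <<'PYEOF'
import sys, base64, zlib
secret = sys.argv[1].encode()                    # the ASCII hex string itself is the secret material
crc = zlib.crc32(secret).to_bytes(4, "little")   # DHHC-1 secrets carry a trailing little-endian CRC-32
print(f"DHHC-1:00:{base64.b64encode(secret + crc).decode()}:")   # 00 = "null" digest index
PYEOF

The output has the same shape as the DHHC-1:00:...==: strings that appear later in this trace; secrets generated for sha256/sha384/sha512 differ only in the digest index (01/02/03).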
00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:57.675 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.933 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:57.933 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:31:57.933 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:57.933 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.dMF 00:31:57.933 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.933 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.933 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.933 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.jij ]] 00:31:57.933 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jij 00:31:57.933 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.933 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.933 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.933 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:57.933 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.5dj 00:31:57.933 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.933 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.933 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.933 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.YQ1 ]] 00:31:57.933 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.YQ1 00:31:57.933 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.933 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.933 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.933 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.aq8 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.kfK ]] 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kfK 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
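In this loop, rpc_cmd is effectively a scripts/rpc.py call against the UNIX socket announced above (/var/tmp/spdk.sock). Registering one host key and its bidirectional controller key with the target's keyring looks roughly like this, reusing the temporary key files generated earlier in the trace:

# key1/ckey1 as added by the loop above (file names taken from the mktemp output earlier in the trace)
scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key1 /tmp/spdk.key-null.5dj
scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.YQ1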
00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.449 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.GXO ]] 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.GXO 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.L2U 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
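configure_kernel_target defines the configfs paths above and then drives the Linux kernel nvmet target through /sys/kernel/config/nvmet; the bare mkdir/echo records that follow write into that tree. Since xtrace does not show redirection targets, the attribute file names in this sketch are assumptions based on the standard nvmet configfs layout rather than literal quotes from common.sh:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
modprobe nvmet
mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_serial"   # serial string; exact attribute assumed
echo 1 > "$subsys/attr_allow_any_host"                         # assumed target of one of the "echo 1" records
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"         # the non-zoned namespace picked below
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"                   # expose the subsystem on the port

The auth setup in host/auth.sh then disables allow_any_host again and links nqn.2024-02.io.spdk:host0 under the subsystem's allowed_hosts, as the later "echo 0" and "ln -s .../hosts/..." records show.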
00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:57.934 15:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:59.310 Waiting for block devices as requested 00:31:59.310 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:59.310 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:59.310 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:59.569 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:59.569 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:59.569 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:59.569 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:59.829 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:59.829 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:59.829 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:00.088 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:00.088 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:00.088 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:00.088 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:00.348 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:00.348 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:00.348 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:00.915 No valid GPT data, bailing 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:00.915 00:32:00.915 Discovery Log Number of Records 2, Generation counter 2 00:32:00.915 =====Discovery Log Entry 0====== 00:32:00.915 trtype: tcp 00:32:00.915 adrfam: ipv4 00:32:00.915 subtype: current discovery subsystem 00:32:00.915 treq: not specified, sq flow control disable supported 00:32:00.915 portid: 1 00:32:00.915 trsvcid: 4420 00:32:00.915 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:00.915 traddr: 10.0.0.1 00:32:00.915 eflags: none 00:32:00.915 sectype: none 00:32:00.915 =====Discovery Log Entry 1====== 00:32:00.915 trtype: tcp 00:32:00.915 adrfam: ipv4 00:32:00.915 subtype: nvme subsystem 00:32:00.915 treq: not specified, sq flow control disable supported 00:32:00.915 portid: 1 00:32:00.915 trsvcid: 4420 00:32:00.915 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:00.915 traddr: 10.0.0.1 00:32:00.915 eflags: none 00:32:00.915 sectype: none 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 
]] 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:00.915 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:00.916 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:00.916 15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.916 15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.174 nvme0n1 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.174 
15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: ]] 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.174 
15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.174 15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.433 nvme0n1 00:32:01.433 15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.433 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.433 15:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.433 15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.433 15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.433 15:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:01.433 15:43:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: ]] 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.433 nvme0n1 00:32:01.433 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.691 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.691 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.691 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.691 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
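Each connect_authenticate pass in the remainder of this run reduces to the same initiator-side RPC pair seen in the trace: pin the allowed DH-HMAC-CHAP digests and dhgroups, then attach with the matching key pair. A sketch for the sha256 / ffdhe2048 / keyid=1 combination (default RPC socket assumed):

scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
scripts/rpc.py bdev_nvme_get_controllers            # expect "nvme0" when authentication succeeded
scripts/rpc.py bdev_nvme_detach_controller nvme0    # tear down before the next digest/dhgroup combination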
00:32:01.691 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.691 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.691 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.691 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.691 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.691 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.691 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.691 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:01.691 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: ]] 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.692 nvme0n1 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.692 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: ]] 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:01.952 15:43:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.952 nvme0n1 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:01.952 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.220 nvme0n1 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: ]] 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.220 15:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.480 nvme0n1 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: ]] 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:02.480 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.481 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:02.481 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:02.481 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:02.481 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:02.481 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.481 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.740 nvme0n1 00:32:02.740 
15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: ]] 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.740 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.032 nvme0n1 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
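The block above completes one iteration of the sha256/ffdhe3072 pass: for keyid 2, nvmet_auth_set_key loads the digest, DH group and DHHC-1 secrets into the kernel nvmet target, then connect_authenticate restricts the SPDK host to that same digest and DH group, attaches with the matching key pair, checks that the controller shows up as nvme0, and detaches so the next keyid can run. A minimal sketch of one such iteration, assuming the usual rpc.py wrapper behind rpc_cmd and the nvmet configfs attribute names (xtrace does not print the redirection targets of the echo calls), looks roughly like this:

  # One (digest=sha256, dhgroup=ffdhe3072, keyid=2) iteration, reconstructed
  # from the trace; the paths and configfs attribute names are assumptions.
  rootdir=${rootdir:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}   # assumed checkout location
  rpc_cmd() { "$rootdir/scripts/rpc.py" "$@"; }                           # the test framework provides its own rpc_cmd

  hostnqn=nqn.2024-02.io.spdk:host0
  subnqn=nqn.2024-02.io.spdk:cnode0
  key='DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y:'       # key2 as it appears in the trace
  ckey='DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N:'      # ckey2 as it appears in the trace

  # Target side: program digest, DH group and secrets for this host NQN.
  cfs=/sys/kernel/config/nvmet/hosts/$hostnqn                             # assumed configfs layout
  echo 'hmac(sha256)' > "$cfs/dhchap_hash"
  echo ffdhe3072 > "$cfs/dhchap_dhgroup"
  echo "$key" > "$cfs/dhchap_key"
  [[ -n $ckey ]] && echo "$ckey" > "$cfs/dhchap_ctrl_key"                 # keyid 4 carries no ctrlr key

  # Host side: allow only the digest/DH group under test, then attach with
  # the matching key pair so the DH-HMAC-CHAP handshake must use them.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Verify the controller came up under the expected name, then tear it down.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

Constraining bdev_nvme_set_options to a single digest and a single DH group per iteration is what forces the handshake to negotiate exactly the combination being tested rather than silently falling back to another one.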
00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: ]] 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.032 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.290 nvme0n1 00:32:03.290 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.290 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.290 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.290 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.290 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.290 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.290 
15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.290 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.291 15:43:33 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.291 15:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.550 nvme0n1 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: ]] 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:03.550 15:43:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:03.550 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.551 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:03.551 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.551 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.551 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.551 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.551 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:03.551 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:03.551 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:03.551 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.551 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.551 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:03.551 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.551 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:03.551 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:03.551 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:03.551 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:03.551 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.551 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.810 nvme0n1 00:32:03.810 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.810 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.810 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.810 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.810 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.810 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.810 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.810 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.810 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.810 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: ]] 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:04.070 15:43:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.070 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.330 nvme0n1 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: ]] 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.330 15:43:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.330 15:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.588 nvme0n1 00:32:04.588 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.588 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.588 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.588 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.588 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.588 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.588 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.588 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.588 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
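Before every attach the same nvmf/common.sh@741-755 sequence recurs; that is get_main_ns_ip picking the address the host should dial. A hedged reconstruction of the helper from those trace lines (the TEST_TRANSPORT variable name and the indirect ${!ip} expansion are assumptions; only the literals tcp, NVMF_INITIATOR_IP and 10.0.0.1 are visible in the log):

  # ip_candidates maps a transport to the *name* of the variable holding the
  # address; the chosen name is then dereferenced indirectly and echoed.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP      # @744
      ip_candidates["tcp"]=NVMF_INITIATOR_IP          # @745
      [[ -z ${TEST_TRANSPORT:-} ]] && return 1        # expands to "tcp" in this run (@747)
      [[ -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}            # @748: ip=NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1                     # @750; lines @751-754 never execute in this log
      echo "${!ip}"                                   # @755: 10.0.0.1 in this job
  }

In this run NVMF_INITIATOR_IP resolves to 10.0.0.1, which is why every bdev_nvme_attach_controller call in this section dials -a 10.0.0.1 -s 4420.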
00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: ]] 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.589 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.847 nvme0n1 00:32:04.847 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.847 15:43:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.847 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.847 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.847 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:05.107 15:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:05.108 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:05.108 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.108 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.367 nvme0n1 00:32:05.367 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.367 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.367 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.367 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.367 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.367 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.367 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.367 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.367 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.367 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.367 15:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.367 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:05.367 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.367 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:05.367 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.367 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:05.367 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:05.367 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:05.367 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:05.367 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:05.367 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:05.367 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:05.367 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:05.368 15:43:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: ]] 00:32:05.368 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:05.368 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:05.368 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.368 15:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:05.368 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:05.368 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:05.368 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.368 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:05.368 15:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.368 15:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.368 15:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.368 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.368 15:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:05.368 15:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:05.368 15:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:05.368 15:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.368 15:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.368 15:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:05.368 15:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.368 15:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:05.368 15:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:05.368 15:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:05.368 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:05.368 15:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.368 15:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.934 nvme0n1 00:32:05.934 15:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.934 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.934 15:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.934 15:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.934 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.934 15:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.934 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.934 
15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.934 15:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.934 15:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.934 15:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.934 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.934 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: ]] 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.935 15:43:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.935 15:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.499 nvme0n1 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: ]] 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:06.499 15:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:06.500 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:06.500 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.500 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.065 nvme0n1 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.065 
15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: ]] 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:07.065 15:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:07.324 15:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:07.324 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.324 15:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.890 nvme0n1 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- 
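The repeated host/auth.sh@42-51 stretch above is the target-side half of each iteration: nvmet_auth_set_key programs the kernel nvmet target with the digest, DH group, host key and (when one exists) controller key before the initiator tries to connect. The xtrace does not print redirections, so the destinations of those echo calls are not visible in this log; the sketch below assumes the standard nvmet configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and an illustrative host directory, and is not lifted from the script verbatim.

# Sketch of the helper traced at host/auth.sh@42-51; configfs paths are assumed, not taken from this log.
nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 keyid=$3
	local key=${keys[keyid]} ckey=${ckeys[keyid]}
	local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # illustrative path

	echo "hmac(${digest})" > "${host_dir}/dhchap_hash"     # e.g. hmac(sha256)
	echo "${dhgroup}" > "${host_dir}/dhchap_dhgroup"       # e.g. ffdhe6144
	echo "${key}" > "${host_dir}/dhchap_key"               # DHHC-1:... host key
	# Only keyids that have a controller key exercise bidirectional authentication.
	[[ -z ${ckey} ]] || echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"
}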
common/autotest_common.sh@10 -- # set +x 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:07.890 15:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.891 15:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.457 nvme0n1 00:32:08.457 15:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.457 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.457 15:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.457 15:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.457 15:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.457 15:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: ]] 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.457 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.458 15:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.458 15:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.458 15:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.458 15:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.458 15:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.458 15:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:08.458 15:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.458 15:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:08.458 15:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:08.458 15:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:08.458 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:08.458 15:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.458 15:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.392 nvme0n1 00:32:09.393 15:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.393 15:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.393 15:43:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.393 15:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.393 15:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.393 15:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: ]] 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.393 15:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.327 nvme0n1 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: ]] 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.327 15:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.702 nvme0n1 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.702 
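Stripped of the xtrace noise, the initiator-side half of an iteration (connect_authenticate, host/auth.sh@55-65) boils down to four RPCs, shown here for the sha256/ffdhe8192/key2 case that was just logged. rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py; key2 and ckey2 are the names of key objects registered earlier in the test, outside this excerpt. The bare nvme0n1 lines scattered through the log are the bdev name printed by the attach RPC after each successful authentication.

# Restrict the initiator to the digest/DH-group pair under test.
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Attach over TCP with DH-HMAC-CHAP, supplying the host key and the controller key.
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key key2 --dhchap-ctrlr-key ckey2

# Confirm authentication actually produced a controller, then detach for the next combination.
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0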
15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: ]] 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.702 15:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.641 nvme0n1 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:12.641 
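Between setting the options and attaching, each iteration calls get_main_ns_ip (nvmf/common.sh@741-755) to decide which address to dial. Only the candidate table and the emptiness checks are visible in the xtrace, so the sketch below is a reconstruction: the transport variable name and the indirect expansion are assumptions, while the rdma/tcp candidate names and the final echo of 10.0.0.1 come straight from the trace.

# Reconstruction of get_main_ns_ip; TEST_TRANSPORT and the ${!ip} indirection are assumed.
get_main_ns_ip() {
	local ip
	local -A ip_candidates=(
		["rdma"]=NVMF_FIRST_TARGET_IP
		["tcp"]=NVMF_INITIATOR_IP
	)

	[[ -z ${TEST_TRANSPORT} ]] && return 1                  # no transport configured
	[[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # transport has no candidate variable
	ip=${ip_candidates[$TEST_TRANSPORT]}                     # e.g. NVMF_INITIATOR_IP
	[[ -z ${!ip} ]] && return 1                              # candidate variable is unset
	echo "${!ip}"                                            # 10.0.0.1 in this run
}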
15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.641 15:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.572 nvme0n1 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: ]] 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:13.572 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:13.573 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.573 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:13.573 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:13.573 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:13.573 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.573 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:13.573 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.573 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.573 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.573 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.573 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:13.573 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.573 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.573 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.573 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.573 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:13.573 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.573 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:13.573 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:13.573 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:13.573 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:13.573 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.573 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.830 nvme0n1 00:32:13.830 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.830 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.830 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.830 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.830 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.830 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.830 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.830 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.830 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.830 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.830 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.830 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.830 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:13.830 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.830 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:13.830 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:13.830 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:13.830 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:13.830 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:13.830 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:13.830 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:13.830 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:13.830 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: ]] 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
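By this point the sweep has moved on from the sha256 runs to sha384 with ffdhe2048; the loop heads visible at host/auth.sh@100-103 show the shape of the whole run, which this stretch of the log simply replays combination by combination. A condensed view, with the helper names taken from the trace and the array contents (populated earlier in the test) left implicit:

# Shape of the sweep driving this part of the log (loop heads at host/auth.sh@100-103).
for digest in "${digests[@]}"; do            # sha256, sha384, ...
	for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ... ffdhe8192
		for keyid in "${!keys[@]}"; do       # 0..4
			nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the kernel target
			connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach
		done
	done
done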
00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.831 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.089 nvme0n1 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: ]] 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.089 nvme0n1 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.089 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: ]] 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.347 15:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.347 nvme0n1 00:32:14.347 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.347 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.347 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.347 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.347 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.347 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.347 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.347 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.347 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.347 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.605 nvme0n1 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:14.605 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: ]] 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.606 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.863 nvme0n1 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: ]] 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
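[Note] The trace above repeats one pattern per digest/dhgroup/keyid combination: set the target-side key with nvmet_auth_set_key, restrict the initiator to the digest and DH group under test, resolve the initiator IP, attach with the matching host key, confirm the controller appears, and detach. The following is a minimal sketch of that host-side iteration reconstructed from the trace, not the verbatim host/auth.sh source: rpc_cmd is assumed to be the autotest wrapper around SPDK's scripts/rpc.py, the TEST_TRANSPORT variable name is assumed (the trace only shows its expanded value "tcp"), and the key names key1/ckey1 are assumed to have been loaded into the SPDK keyring by the harness beforehand.

```bash
#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration as seen in the trace
# (sha384 / ffdhe3072 / keyid=1). Values 10.0.0.1, 4420 and the NQNs
# are taken directly from the trace output above.

digest=sha384
dhgroup=ffdhe3072
keyid=1

# Restrict the initiator to the digest/dhgroup pair under test.
rpc_cmd bdev_nvme_set_options \
        --dhchap-digests "$digest" \
        --dhchap-dhgroups "$dhgroup"

# Resolve the initiator-side IP the way the get_main_ns_ip trace shows:
# map transport -> environment variable name, then dereference it.
declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
ip_var=${ip_candidates[$TEST_TRANSPORT]}   # NVMF_INITIATOR_IP for tcp
ip=${!ip_var}                              # 10.0.0.1 in this run

# Attach with the host key (and the controller key, when this keyid has one).
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$ip" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Authentication succeeded if the controller shows up, then tear it down
# before the next keyid/dhgroup combination.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0
```

The "nvme0n1" lines interleaved in the trace are the namespace of the freshly attached controller appearing between the attach and the subsequent bdev_nvme_get_controllers check; the target-side half of each iteration (writing 'hmac(sha384)', the DH group, and the DHHC-1 keys into the kernel nvmet configuration) is performed by nvmet_auth_set_key and is only partially visible here, so its configfs paths are not reproduced in the sketch.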
00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.863 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.121 nvme0n1 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.121 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: ]] 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.379 15:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.379 nvme0n1 00:32:15.379 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.379 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.379 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.379 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.379 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.379 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.379 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.379 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.379 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.379 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: ]] 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.636 nvme0n1 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.636 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.894 nvme0n1 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.894 15:43:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.894 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: ]] 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.152 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.410 nvme0n1 00:32:16.410 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.410 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.410 15:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.410 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.410 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.410 15:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: ]] 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:16.410 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.411 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.411 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.411 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.411 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:16.411 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:16.411 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:16.411 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.411 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.411 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:16.411 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.411 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:16.411 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:16.411 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:16.411 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:16.411 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.411 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.747 nvme0n1 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.747 15:43:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: ]] 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.747 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.006 nvme0n1 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: ]] 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:17.006 15:43:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.006 15:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.572 nvme0n1 00:32:17.572 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.572 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.572 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.572 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.572 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.572 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.572 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.572 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.572 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.572 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.572 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.572 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:17.572 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:17.572 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.572 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:17.572 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:17.572 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:17.572 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:17.572 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:17.572 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:17.572 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:17.572 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:17.572 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:17.573 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.829 nvme0n1 00:32:17.829 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.829 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.829 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.829 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.829 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.829 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.829 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.829 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.829 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.829 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.829 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.829 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:17.829 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.829 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:17.829 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.829 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:17.829 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: ]] 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.830 15:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.393 nvme0n1 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: ]] 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.393 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.956 nvme0n1 00:32:18.956 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.956 15:43:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.956 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.956 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.956 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.956 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: ]] 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.957 15:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.520 nvme0n1 00:32:19.520 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.520 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.520 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.520 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.520 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.520 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: ]] 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.776 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.339 nvme0n1 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
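[editor's note] The nvmet_auth_set_key steps that recur throughout this trace (the echo 'hmac(sha384)', echo ffdhe6144/ffdhe8192, and echo DHHC-1:... lines from host/auth.sh@48-51) provision the target side with the digest, DH group, and host/controller keys before each connect attempt. A minimal sketch of what such a helper does is shown below; the order of the writes mirrors the trace, but the kernel nvmet configfs destination and attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are assumptions, since the log only shows the echoed values, not where they land.

# Sketch of the target-side key setup seen in this trace (assumed configfs layout).
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local hostnqn=nqn.2024-02.io.spdk:host0
    local host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn    # assumed location
    echo "hmac($digest)"   > "$host_dir/dhchap_hash"          # e.g. hmac(sha384)
    echo "$dhgroup"        > "$host_dir/dhchap_dhgroup"       # e.g. ffdhe8192
    echo "${keys[$keyid]}" > "$host_dir/dhchap_key"           # DHHC-1:... host key
    # the controller (bidirectional) key is only written when a ckey exists for this keyid
    [[ -n ${ckeys[$keyid]} ]] && echo "${ckeys[$keyid]}" > "$host_dir/dhchap_ctrl_key"
}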
00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.339 15:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.596 nvme0n1 00:32:20.596 15:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.853 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.853 15:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.853 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.853 15:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.853 15:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.853 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.853 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.853 15:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.853 15:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.853 15:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.853 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:20.853 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.853 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:20.853 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.853 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:20.853 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:20.853 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:20.853 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: ]] 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
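[editor's note] On the host side, every connect_authenticate iteration in the trace follows the same RPC sequence: restrict the allowed DH-HMAC-CHAP digest and DH group, attach a controller with the matching key (plus the controller key when one is defined), confirm the controller registered under the expected name, then detach it. The RPC invocations below are copied from the trace (here the sha384 / ffdhe8192 / key0 case); only the surrounding shell scaffolding, such as the name variable, is a sketch.

# Per-iteration host-side flow, as exercised by connect_authenticate in this trace.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# verify the controller came up under the expected name, then tear it down
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0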
00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.854 15:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.783 nvme0n1 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: ]] 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.783 15:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.784 15:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.784 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.784 15:43:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.784 15:43:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.784 15:43:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.784 15:43:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.784 15:43:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.784 15:43:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.784 15:43:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.784 15:43:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.784 15:43:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.784 15:43:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.784 15:43:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:21.784 15:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.784 15:43:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.716 nvme0n1 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: ]] 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.716 15:43:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.673 nvme0n1 00:32:23.673 15:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.673 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.673 15:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.673 15:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.673 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.673 15:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.673 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.673 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.673 15:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.673 15:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.673 15:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.673 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.673 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:23.673 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.673 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:23.673 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:23.673 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:23.673 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: ]] 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.674 15:43:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.048 nvme0n1 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.048 15:43:55 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.048 15:43:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.983 nvme0n1 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: ]] 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.983 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.984 nvme0n1 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.984 15:43:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: ]] 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.984 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.242 nvme0n1 00:32:26.242 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.242 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.242 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.242 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.242 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.242 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.242 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.242 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.242 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.242 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.242 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.242 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.242 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:26.242 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.242 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:26.242 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:26.242 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:26.242 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:26.242 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:26.242 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:26.242 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:26.242 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:26.242 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: ]] 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.243 15:43:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.501 nvme0n1 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.501 15:43:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: ]] 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:26.501 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:26.502 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:26.502 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.502 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:26.502 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.502 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.502 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.502 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.502 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:26.502 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.502 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.502 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.502 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.502 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.502 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.502 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.502 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.502 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.502 15:43:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:26.502 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.502 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.760 nvme0n1 00:32:26.760 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.760 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.760 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.760 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.760 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.761 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.020 nvme0n1 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: ]] 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.020 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.279 nvme0n1 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.279 
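The target-side half, nvmet_auth_set_key, only shows up in the trace as the echoes of 'hmac(sha512)', the DH group and the DHHC-1 secrets. A plausible reconstruction, assuming the kernel nvmet target and its per-host configfs attributes (the attribute names dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key are assumptions; the harness may write different paths):

# Hypothetical sketch: program one host entry on the kernel nvmet target.
# $key / $ckey stand for the DHHC-1:... secrets echoed in the trace above.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha512)' > "$host/dhchap_hash"       # digest
echo ffdhe3072      > "$host/dhchap_dhgroup"    # DH group
echo "$key"         > "$host/dhchap_key"        # host secret
[[ -n "$ckey" ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # controller secret, only when bidirectional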
15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: ]] 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.279 15:43:57 
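get_main_ns_ip, traced before every attach, simply picks the address variable matching the transport: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, bailing out if either the transport or the resolved address is empty. A condensed sketch of that logic; the candidate map and checks come straight from the trace, while the transport variable name ($TEST_TRANSPORT) and the error handling are assumptions:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z "$TEST_TRANSPORT" ]] && return 1
    [[ -z "${ip_candidates[$TEST_TRANSPORT]}" ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP for tcp
    [[ -z "${!ip}" ]] && return 1          # indirect expansion -> 10.0.0.1 in this run
    echo "${!ip}"
}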
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.279 15:43:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.537 nvme0n1 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: ]] 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.537 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.538 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.538 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.538 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.538 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:27.538 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.538 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.797 nvme0n1 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.797 15:43:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: ]] 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.797 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.055 nvme0n1 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:28.055 
15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.055 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.056 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:28.056 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.056 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.313 nvme0n1 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: ]] 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:28.313 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.314 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.314 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.314 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.314 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.314 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.314 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.314 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.314 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.314 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:28.314 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.314 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.314 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.314 15:43:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.314 15:43:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:28.314 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.314 15:43:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.572 nvme0n1 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: ]] 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.572 15:43:59 
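The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion seen before every attach is what makes bidirectional authentication optional: when the controller key for a keyid is empty (keyid 4 above), the array expands to nothing and bdev_nvme_attach_controller is invoked with --dhchap-key only. A small illustration of the idiom, with hypothetical values:

# Illustration of the conditional-argument idiom from host/auth.sh@58; secrets are placeholders.
ckeys=([1]="DHHC-1:02:example-secret:" [4]="")
for keyid in 1 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
done
# keyid=1 extra args: --dhchap-ctrlr-key ckey1
# keyid=4 extra args: <none>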
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.572 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.830 nvme0n1 00:32:28.830 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.830 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.830 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.830 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.830 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.830 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: ]] 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.087 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:29.088 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.088 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.345 nvme0n1 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: ]] 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.345 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.346 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.346 15:43:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.346 15:43:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:29.346 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.346 15:43:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.603 nvme0n1 00:32:29.603 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.603 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.603 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.603 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.603 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.604 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.169 nvme0n1 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: ]] 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.169 15:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.763 nvme0n1 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: ]] 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
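The cycle the trace keeps repeating for each key index is easier to see stripped of the xtrace noise: restrict the initiator to one digest/dhgroup pair, attach with the host key (plus the bidirectional controller key for the indexes that have one — key4 has none in this run), confirm the controller shows up, then detach before the next combination. A minimal sketch of one such iteration, assuming rpc_cmd is the harness's wrapper around SPDK's JSON-RPC client and that key0/ckey0 were registered earlier in the run; this is illustrative, not the harness code itself:

  # One DH-HMAC-CHAP connect/verify iteration (illustrative sketch).
  digest=sha512
  dhgroup=ffdhe6144
  keyid=0

  # Restrict the initiator to exactly this digest/dhgroup pair.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach with the host key and, where configured, the bidirectional controller key.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # The controller must appear as nvme0; detach before the next key/dhgroup combination.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
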
00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.763 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.327 nvme0n1 00:32:31.327 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.327 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.327 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.327 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.327 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: ]] 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.328 15:44:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.892 nvme0n1 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: ]] 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.892 15:44:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.458 nvme0n1 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.458 15:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.021 nvme0n1 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.021 15:44:03 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzNWY3OTg0ODdlYTQ3NzM5NDE2YmRkMDY2MzRmYTW1UiOm: 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: ]] 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2IwMWU1ODFhY2VkZmMzYzg2ZTJmN2Q4MmQwZWQxYmEyOTE5ZDg4NGU2NDdiNjhiN2YyMzJiNjczNTc0Y2YwOEuBIBo=: 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.021 15:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.022 15:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.022 15:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.022 15:44:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.022 15:44:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:33.022 15:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.022 15:44:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.956 nvme0n1 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: ]] 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.956 15:44:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.889 nvme0n1 00:32:34.889 15:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.889 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.889 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.889 15:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.890 15:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.890 15:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.148 15:44:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.148 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.148 15:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.148 15:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA3ODlkZjFiMDJiMTE2MDJiM2M5ZDZjYmJhYTVkYmWQZ/2y: 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: ]] 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjY3ODZmZTc0NWJhYTQ1NWNjZmIxZWU2NjdmMDI3NjNe+T2N: 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.149 15:44:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.084 nvme0n1 00:32:36.084 15:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.084 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.084 15:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA4Njc3YWQ0YWVlNDBmOWM0M2FkMmI0NzYxOGNlMDJjZWNlYTEzOThmZjQyNTcwCH0a5w==: 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: ]] 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjExOWRjZTYwMTI1NmM4NWY5YTAxZTg0NGIzZWU1Y2YC69F+: 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:36.085 15:44:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.085 15:44:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.019 nvme0n1 00:32:37.019 15:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.019 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.019 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.019 15:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.019 15:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.019 15:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.019 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.019 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.019 15:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.019 15:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.019 15:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.019 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:37.019 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:37.019 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.019 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:37.019 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:37.019 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:37.019 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:37.019 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:37.019 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:37.019 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmZkNzBlMWRmNTRkZDY3ZDAxYzNhNWZjYTZmYjJkY2MxMTg5ZTEwOGY2Njc3OTlkNzQ3YzQwYjE2NmE4Y2E0YU31CyQ=: 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:37.020 15:44:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.952 nvme0n1 00:32:37.952 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.952 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.952 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.952 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.952 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.952 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.952 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.952 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.952 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.952 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.952 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.952 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:37.952 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.952 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.952 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:37.952 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:37.952 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ1NDIxNzEyYmE4YjdhNmYyYzcxYjdjMDFjMTk3YTRlZjYyNTM3Zjk0MzE2MTA0wG+ACA==: 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: ]] 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzQ3OTgxNjBhZDYyNWQ3ODI1M2E3ZGI1ZGZhMjczNGIxZmM4MDNkYzIxZGRhNGFlbrJ28g==: 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.953 
15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.953 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.210 request: 00:32:38.210 { 00:32:38.210 "name": "nvme0", 00:32:38.210 "trtype": "tcp", 00:32:38.210 "traddr": "10.0.0.1", 00:32:38.210 "adrfam": "ipv4", 00:32:38.210 "trsvcid": "4420", 00:32:38.210 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:38.210 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:38.210 "prchk_reftag": false, 00:32:38.210 "prchk_guard": false, 00:32:38.210 "hdgst": false, 00:32:38.210 "ddgst": false, 00:32:38.210 "method": "bdev_nvme_attach_controller", 00:32:38.210 "req_id": 1 00:32:38.210 } 00:32:38.210 Got JSON-RPC error response 00:32:38.210 response: 00:32:38.210 { 00:32:38.210 "code": -5, 00:32:38.210 "message": "Input/output error" 00:32:38.210 } 00:32:38.210 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:38.210 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:38.210 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:38.210 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:38.210 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:38.210 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.210 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:38.210 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.210 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.210 15:44:08 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.210 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:38.210 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:38.210 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.210 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.210 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.210 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.210 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.210 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.210 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.210 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.210 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.210 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.211 request: 00:32:38.211 { 00:32:38.211 "name": "nvme0", 00:32:38.211 "trtype": "tcp", 00:32:38.211 "traddr": "10.0.0.1", 00:32:38.211 "adrfam": "ipv4", 00:32:38.211 "trsvcid": "4420", 00:32:38.211 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:38.211 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:38.211 "prchk_reftag": false, 00:32:38.211 "prchk_guard": false, 00:32:38.211 "hdgst": false, 00:32:38.211 "ddgst": false, 00:32:38.211 "dhchap_key": "key2", 00:32:38.211 "method": "bdev_nvme_attach_controller", 00:32:38.211 "req_id": 1 00:32:38.211 } 00:32:38.211 Got JSON-RPC error response 00:32:38.211 response: 00:32:38.211 { 00:32:38.211 "code": -5, 00:32:38.211 "message": "Input/output error" 00:32:38.211 } 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:38.211 15:44:08 
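The failing attach attempts above exercise the negative path: with the target now requiring authentication (sha256/ffdhe2048, key index 1), an attach that supplies no key or a mismatched key must be rejected, which surfaces as JSON-RPC error -5, "Input/output error". A sketch of that check, assuming NOT is the autotest helper seen in the trace, which inverts the exit status of the command it wraps:

  # Illustrative negative check: the attach is expected to fail while auth is required.
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
  # rpc_cmd reports: Got JSON-RPC error response {"code": -5, "message": "Input/output error"}

  # No controller may remain registered after the rejected attach.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ]]
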
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.211 15:44:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.469 request: 00:32:38.469 { 00:32:38.469 "name": "nvme0", 00:32:38.469 "trtype": "tcp", 00:32:38.469 "traddr": "10.0.0.1", 00:32:38.469 "adrfam": "ipv4", 
00:32:38.469 "trsvcid": "4420", 00:32:38.469 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:38.469 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:38.469 "prchk_reftag": false, 00:32:38.469 "prchk_guard": false, 00:32:38.469 "hdgst": false, 00:32:38.469 "ddgst": false, 00:32:38.469 "dhchap_key": "key1", 00:32:38.469 "dhchap_ctrlr_key": "ckey2", 00:32:38.469 "method": "bdev_nvme_attach_controller", 00:32:38.469 "req_id": 1 00:32:38.469 } 00:32:38.469 Got JSON-RPC error response 00:32:38.469 response: 00:32:38.469 { 00:32:38.469 "code": -5, 00:32:38.469 "message": "Input/output error" 00:32:38.469 } 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:38.469 rmmod nvme_tcp 00:32:38.469 rmmod nvme_fabrics 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1242822 ']' 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1242822 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1242822 ']' 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1242822 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1242822 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1242822' 00:32:38.469 killing process with pid 1242822 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1242822 00:32:38.469 15:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1242822 00:32:38.728 15:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:32:38.728 15:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:38.728 15:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:38.728 15:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:38.728 15:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:38.728 15:44:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.728 15:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:38.728 15:44:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.634 15:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:40.634 15:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:40.634 15:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:40.634 15:44:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:40.634 15:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:40.634 15:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:32:40.634 15:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:40.634 15:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:40.635 15:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:40.635 15:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:40.635 15:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:40.635 15:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:40.892 15:44:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:41.827 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:41.827 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:41.827 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:41.827 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:41.827 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:41.827 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:41.827 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:41.827 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:41.827 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:42.085 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:42.085 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:42.085 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:42.085 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:42.085 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:42.085 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:42.085 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:43.017 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:43.017 15:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.dMF /tmp/spdk.key-null.5dj /tmp/spdk.key-sha256.aq8 /tmp/spdk.key-sha384.449 /tmp/spdk.key-sha512.L2U 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:43.017 15:44:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:43.951 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:43.951 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:43.951 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:43.951 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:43.951 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:44.209 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:44.209 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:44.209 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:44.209 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:44.209 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:44.209 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:44.209 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:44.209 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:44.209 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:44.209 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:44.209 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:44.209 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:44.209 00:32:44.209 real 0m49.502s 00:32:44.209 user 0m47.218s 00:32:44.209 sys 0m5.730s 00:32:44.209 15:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:44.209 15:44:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.209 ************************************ 00:32:44.209 END TEST nvmf_auth_host 00:32:44.209 ************************************ 00:32:44.209 15:44:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:44.209 15:44:14 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:32:44.209 15:44:14 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:44.209 15:44:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:44.209 15:44:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:44.209 15:44:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:44.209 ************************************ 00:32:44.209 START TEST nvmf_digest 00:32:44.209 ************************************ 00:32:44.209 15:44:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:44.467 * Looking for test storage... 
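The nvmf_auth_host suite that closes out above exercises DH-CHAP failure paths: each bdev_nvme_attach_controller attempt against the kernel target at 10.0.0.1:4420 is made with a missing or mismatched key and is expected to fail with JSON-RPC code -5 (Input/output error) while bdev_nvme_get_controllers keeps reporting zero controllers. The lines below are a hedged, standalone restatement of one such check, built from the rpc.py invocations visible in the trace; the NQNs are the ones this run generated, and key1/ckey2 name key objects the suite registered earlier (not shown here).

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# An attach with a mismatched controller key must NOT succeed ...
if out=$("$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey2 2>&1); then
    echo "unexpected success: $out" >&2
    exit 1
fi

# ... and no controller may be left behind on the initiator side.
count=$("$rpc" bdev_nvme_get_controllers | jq length)
[[ $count -eq 0 ]] || { echo "controller count is $count, expected 0" >&2; exit 1; }

The suite's NOT/valid_exec_arg wrapper does the same thing with more bookkeeping (es=1, xtrace juggling); the sketch keeps only the pass/fail logic.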
00:32:44.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:44.467 15:44:15 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:44.467 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:44.467 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:44.467 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:44.467 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:44.467 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:44.467 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:44.467 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:44.467 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:44.467 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:44.467 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:44.467 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:44.467 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:44.467 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:44.467 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:44.467 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:44.467 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:44.467 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:44.467 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:44.467 15:44:15 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:44.467 15:44:15 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:44.467 15:44:15 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:44.468 15:44:15 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:32:44.468 15:44:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:46.421 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:46.421 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:32:46.421 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:46.421 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:46.421 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:46.421 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:46.421 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:46.421 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:32:46.421 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:46.421 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:32:46.421 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:32:46.421 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:32:46.421 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:32:46.421 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:32:46.421 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:46.422 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:46.422 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:46.422 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:46.422 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:46.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:46.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:32:46.422 00:32:46.422 --- 10.0.0.2 ping statistics --- 00:32:46.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.422 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:46.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:46.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:32:46.422 00:32:46.422 --- 10.0.0.1 ping statistics --- 00:32:46.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.422 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:46.422 ************************************ 00:32:46.422 START TEST nvmf_digest_clean 00:32:46.422 ************************************ 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1252238 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1252238 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1252238 ']' 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.422 
15:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:46.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:46.422 15:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:46.422 [2024-07-13 15:44:17.043446] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:32:46.422 [2024-07-13 15:44:17.043522] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:46.422 EAL: No free 2048 kB hugepages reported on node 1 00:32:46.423 [2024-07-13 15:44:17.081702] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:46.423 [2024-07-13 15:44:17.108841] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.681 [2024-07-13 15:44:17.192853] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:46.681 [2024-07-13 15:44:17.192920] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:46.681 [2024-07-13 15:44:17.192935] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:46.681 [2024-07-13 15:44:17.192946] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:46.681 [2024-07-13 15:44:17.192963] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
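Before the target application above was launched, nvmf_tcp_init (traced a little earlier) split the two E810 ports into a target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, an iptables rule opens TCP/4420, and both directions are ping-checked. A condensed recreation of that plumbing, with the interface names this machine's setup assigned (they will differ elsewhere):

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                  # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator IP in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1           # target namespace -> initiator

Keeping target and initiator in separate namespaces is what lets a single host exercise a real TCP path instead of loopback shortcuts.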
00:32:46.681 [2024-07-13 15:44:17.192987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:46.681 null0 00:32:46.681 [2024-07-13 15:44:17.379294] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:46.681 [2024-07-13 15:44:17.403477] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1252303 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1252303 /var/tmp/bperf.sock 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1252303 ']' 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:46.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
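With the target app up inside the namespace, common_target_config (host/digest.sh@43) pushes its configuration over /var/tmp/spdk.sock; the trace only shows the resulting null0 bdev and the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice, not the rpc_cmd batch itself. The sketch below is therefore an assumed reconstruction using SPDK's standard RPCs, not a copy of digest.sh; the null bdev size and block size are illustrative, while the NQN, serial and transport options are the values visible in the trace.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # defaults to /var/tmp/spdk.sock

"$rpc" framework_start_init                              # target was started with --wait-for-rpc
"$rpc" nvmf_create_transport -t tcp -o                   # '-t tcp -o' taken verbatim from NVMF_TRANSPORT_OPTS above
"$rpc" bdev_null_create null0 100 4096                   # size_mb / block_size chosen for illustration
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME -a
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420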
00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:46.681 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:46.940 [2024-07-13 15:44:17.454411] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:32:46.940 [2024-07-13 15:44:17.454483] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1252303 ] 00:32:46.940 EAL: No free 2048 kB hugepages reported on node 1 00:32:46.940 [2024-07-13 15:44:17.486207] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:46.940 [2024-07-13 15:44:17.515777] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.940 [2024-07-13 15:44:17.606061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.940 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:46.940 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:46.940 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:46.940 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:46.940 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:47.507 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:47.507 15:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:47.765 nvme0n1 00:32:47.765 15:44:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:47.765 15:44:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:48.023 Running I/O for 2 seconds... 
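On the initiator side, run_bperf launches bdevperf with --wait-for-rpc and -z, so it sits on /var/tmp/bperf.sock without doing I/O until driven over RPC: framework_start_init finishes bringing the app up, bdev_nvme_attach_controller --ddgst creates an NVMe/TCP bdev with the data digest (CRC32C) enabled, and bdevperf.py perform_tests starts the 2-second run. The sequence below restates those calls from the trace as a standalone sketch; the workspace path is this job's, and the harness waits for the socket to appear before issuing the first RPC.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

$SPDK/build/examples/bdevperf -m 2 -r "$SOCK" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
# (waitforlisten on $SOCK happens here in the real harness)

$SPDK/scripts/rpc.py -s "$SOCK" framework_start_init
$SPDK/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$SPDK/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests

The --ddgst flag is the point of the exercise: every read payload has to be CRC32C-verified, which is what the accel statistics check afterwards confirms.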
00:32:49.923 00:32:49.923 Latency(us) 00:32:49.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:49.923 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:49.923 nvme0n1 : 2.01 19268.73 75.27 0.00 0.00 6633.12 3495.25 13981.01 00:32:49.923 =================================================================================================================== 00:32:49.923 Total : 19268.73 75.27 0.00 0.00 6633.12 3495.25 13981.01 00:32:49.923 0 00:32:49.923 15:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:49.923 15:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:49.923 15:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:49.923 15:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:49.923 15:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:49.923 | select(.opcode=="crc32c") 00:32:49.923 | "\(.module_name) \(.executed)"' 00:32:50.180 15:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:50.181 15:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:50.181 15:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:50.181 15:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:50.181 15:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1252303 00:32:50.181 15:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1252303 ']' 00:32:50.181 15:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1252303 00:32:50.181 15:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:50.181 15:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:50.181 15:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1252303 00:32:50.181 15:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:50.181 15:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:50.181 15:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1252303' 00:32:50.181 killing process with pid 1252303 00:32:50.181 15:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1252303 00:32:50.181 Received shutdown signal, test time was about 2.000000 seconds 00:32:50.181 00:32:50.181 Latency(us) 00:32:50.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:50.181 =================================================================================================================== 00:32:50.181 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:50.181 15:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1252303 00:32:50.439 15:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:50.439 15:44:21 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:50.439 15:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:50.439 15:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:50.439 15:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:50.439 15:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:50.439 15:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:50.439 15:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1252713 00:32:50.439 15:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:50.439 15:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1252713 /var/tmp/bperf.sock 00:32:50.439 15:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1252713 ']' 00:32:50.439 15:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:50.439 15:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:50.439 15:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:50.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:50.439 15:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:50.439 15:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:50.439 [2024-07-13 15:44:21.126750] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:32:50.439 [2024-07-13 15:44:21.126827] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1252713 ] 00:32:50.439 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:50.439 Zero copy mechanism will not be used. 00:32:50.439 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.439 [2024-07-13 15:44:21.158058] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
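After each run the test does not trust the I/O numbers alone: it pulls bdevperf's accel framework statistics and checks that CRC32C operations were actually executed, and by the expected module (software here, since the clean variant runs with dsa_initiator=false and scan_dsa=false). The jq filter is the one in the trace; only the socket and workspace paths are carried over from the earlier sketch.

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

read -r acc_module acc_executed < <($rpc accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

(( acc_executed > 0 ))            # some CRC32C digests were really computed
[[ $acc_module == software ]]     # ... and by the software module, not a DSA offload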
00:32:50.439 [2024-07-13 15:44:21.185972] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.696 [2024-07-13 15:44:21.271124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.696 15:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:50.696 15:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:50.696 15:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:50.696 15:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:50.696 15:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:50.954 15:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:50.954 15:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:51.518 nvme0n1 00:32:51.518 15:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:51.518 15:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:51.518 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:51.518 Zero copy mechanism will not be used. 00:32:51.518 Running I/O for 2 seconds... 
00:32:54.044 00:32:54.044 Latency(us) 00:32:54.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:54.044 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:54.044 nvme0n1 : 2.01 3071.72 383.97 0.00 0.00 5204.09 4903.06 14757.74 00:32:54.044 =================================================================================================================== 00:32:54.044 Total : 3071.72 383.97 0.00 0.00 5204.09 4903.06 14757.74 00:32:54.045 0 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:54.045 | select(.opcode=="crc32c") 00:32:54.045 | "\(.module_name) \(.executed)"' 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1252713 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1252713 ']' 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1252713 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1252713 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1252713' 00:32:54.045 killing process with pid 1252713 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1252713 00:32:54.045 Received shutdown signal, test time was about 2.000000 seconds 00:32:54.045 00:32:54.045 Latency(us) 00:32:54.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:54.045 =================================================================================================================== 00:32:54.045 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1252713 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:32:54.045 15:44:24 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1253113 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1253113 /var/tmp/bperf.sock 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1253113 ']' 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:54.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:54.045 15:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:54.303 [2024-07-13 15:44:24.846428] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:32:54.303 [2024-07-13 15:44:24.846507] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1253113 ] 00:32:54.303 EAL: No free 2048 kB hugepages reported on node 1 00:32:54.303 [2024-07-13 15:44:24.879178] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
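The latency tables above are internally consistent, which is a quick sanity check on any run: IOPS times I/O size should reproduce the MiB/s column. For the two completed randread runs, 19268.73 IOPS x 4096 B / 2^20 is about 75.27 MiB/s and 3071.72 IOPS x 131072 B / 2^20 is about 383.97 MiB/s, matching what bdevperf reported. A one-liner for repeating the check on any row:

# iops * io_size_bytes -> MiB/s; compare with the MiB/s column of the table.
mibs() { awk -v iops="$1" -v bs="$2" 'BEGIN { printf "%.2f MiB/s\n", iops * bs / 1048576 }'; }
mibs 19268.73 4096      # randread, 4 KiB, qd 128  -> ~75.27, matching the table
mibs 3071.72  131072    # randread, 128 KiB, qd 16 -> ~383.97, matching the table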
00:32:54.303 [2024-07-13 15:44:24.911780] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.303 [2024-07-13 15:44:25.007102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:54.303 15:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:54.303 15:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:54.303 15:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:54.303 15:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:54.303 15:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:54.869 15:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:54.869 15:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:55.127 nvme0n1 00:32:55.127 15:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:55.127 15:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:55.385 Running I/O for 2 seconds... 00:32:57.282 00:32:57.282 Latency(us) 00:32:57.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:57.282 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:57.282 nvme0n1 : 2.01 20525.38 80.18 0.00 0.00 6225.99 3495.25 14078.10 00:32:57.283 =================================================================================================================== 00:32:57.283 Total : 20525.38 80.18 0.00 0.00 6225.99 3495.25 14078.10 00:32:57.283 0 00:32:57.283 15:44:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:57.283 15:44:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:57.283 15:44:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:57.283 15:44:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:57.283 15:44:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:57.283 | select(.opcode=="crc32c") 00:32:57.283 | "\(.module_name) \(.executed)"' 00:32:57.541 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:57.541 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:57.541 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:57.541 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:57.541 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1253113 00:32:57.541 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 1253113 ']' 00:32:57.541 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1253113 00:32:57.541 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:57.541 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:57.541 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1253113 00:32:57.541 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:57.541 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:57.541 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1253113' 00:32:57.541 killing process with pid 1253113 00:32:57.541 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1253113 00:32:57.541 Received shutdown signal, test time was about 2.000000 seconds 00:32:57.541 00:32:57.541 Latency(us) 00:32:57.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:57.541 =================================================================================================================== 00:32:57.541 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:57.541 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1253113 00:32:57.799 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:32:57.799 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:57.799 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:57.799 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:57.799 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:57.799 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:57.799 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:57.799 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1253642 00:32:57.799 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:57.799 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1253642 /var/tmp/bperf.sock 00:32:57.799 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1253642 ']' 00:32:57.799 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:57.799 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:57.799 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:57.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
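Note: the accel_get_stats call and jq filter above are how the clean test decides whether the CRC32C work ran in the software module or on DSA. A sketch of that check in isolation, reusing the exact RPC and filter from the trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Pull the crc32c entry out of bdevperf's accel statistics; module name and
    # executed count are exactly what the clean test asserts on (software, > 0).
    read -r acc_module acc_executed < <(
        $RPC -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 )) && [[ $acc_module == software ]]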
00:32:57.799 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:57.799 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:57.799 [2024-07-13 15:44:28.488324] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:32:57.800 [2024-07-13 15:44:28.488409] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1253642 ] 00:32:57.800 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:57.800 Zero copy mechanism will not be used. 00:32:57.800 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.800 [2024-07-13 15:44:28.522095] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:57.800 [2024-07-13 15:44:28.554910] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.058 [2024-07-13 15:44:28.653983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:58.058 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:58.058 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:58.058 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:58.058 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:58.058 15:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:58.624 15:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:58.624 15:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:58.882 nvme0n1 00:32:58.882 15:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:58.882 15:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:58.882 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:58.882 Zero copy mechanism will not be used. 00:32:58.882 Running I/O for 2 seconds... 
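Note: once the new bperf socket answers, the trace below configures it and starts I/O: framework_start_init, a data-digest (--ddgst) attach of the target at 10.0.0.2:4420, and perform_tests. Pulled out of the harness, those three RPC invocations look like this (paths, NQN and address are the ones used in this job):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    BPERF_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py

    # Finish framework init, attach the target with TCP data digest enabled,
    # then run the configured workload for the 2-second window.
    $RPC -s /var/tmp/bperf.sock framework_start_init
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $BPERF_PY -s /var/tmp/bperf.sock perform_tests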
00:33:01.411 00:33:01.411 Latency(us) 00:33:01.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:01.411 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:01.411 nvme0n1 : 2.01 2227.26 278.41 0.00 0.00 7166.58 5412.79 13883.92 00:33:01.411 =================================================================================================================== 00:33:01.411 Total : 2227.26 278.41 0.00 0.00 7166.58 5412.79 13883.92 00:33:01.411 0 00:33:01.411 15:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:01.411 15:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:01.411 15:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:01.411 15:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:01.411 | select(.opcode=="crc32c") 00:33:01.411 | "\(.module_name) \(.executed)"' 00:33:01.411 15:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:01.411 15:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:01.411 15:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:01.411 15:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:01.411 15:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:01.411 15:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1253642 00:33:01.411 15:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1253642 ']' 00:33:01.411 15:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1253642 00:33:01.411 15:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:01.411 15:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:01.411 15:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1253642 00:33:01.411 15:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:01.411 15:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:01.411 15:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1253642' 00:33:01.411 killing process with pid 1253642 00:33:01.411 15:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1253642 00:33:01.411 Received shutdown signal, test time was about 2.000000 seconds 00:33:01.411 00:33:01.411 Latency(us) 00:33:01.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:01.411 =================================================================================================================== 00:33:01.411 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:01.411 15:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1253642 00:33:01.411 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1252238 00:33:01.411 15:44:32 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1252238 ']' 00:33:01.411 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1252238 00:33:01.411 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:01.411 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:01.411 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1252238 00:33:01.411 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:01.411 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:01.411 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1252238' 00:33:01.411 killing process with pid 1252238 00:33:01.411 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1252238 00:33:01.411 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1252238 00:33:01.671 00:33:01.671 real 0m15.360s 00:33:01.671 user 0m30.896s 00:33:01.671 sys 0m3.884s 00:33:01.671 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:01.671 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:01.671 ************************************ 00:33:01.671 END TEST nvmf_digest_clean 00:33:01.671 ************************************ 00:33:01.671 15:44:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:33:01.671 15:44:32 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:01.671 15:44:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:01.671 15:44:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:01.671 15:44:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:01.671 ************************************ 00:33:01.671 START TEST nvmf_digest_error 00:33:01.671 ************************************ 00:33:01.671 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:33:01.671 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:01.671 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:01.671 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:01.671 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:01.671 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1254080 00:33:01.671 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:01.671 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1254080 00:33:01.671 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1254080 ']' 00:33:01.671 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:33:01.671 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:01.671 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:01.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:01.671 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:01.671 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:01.988 [2024-07-13 15:44:32.456647] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:33:01.988 [2024-07-13 15:44:32.456731] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:01.988 EAL: No free 2048 kB hugepages reported on node 1 00:33:01.988 [2024-07-13 15:44:32.493601] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:01.988 [2024-07-13 15:44:32.525681] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.988 [2024-07-13 15:44:32.615599] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:01.988 [2024-07-13 15:44:32.615668] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:01.988 [2024-07-13 15:44:32.615694] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:01.988 [2024-07-13 15:44:32.615725] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:01.988 [2024-07-13 15:44:32.615745] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
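Note: the nvmf_digest_error target just launched is held at --wait-for-rpc so that crc32c can be rerouted to the error-injection accel module before the framework initializes; the trace that follows shows the accel_assign_opc call and the later inject-error toggles. A sketch of that target-side setup, using the job's rpc.py path and the default target socket (the harness drives these through rpc_cmd; ordering relative to framework_start_init is only made explicit here):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NVMF_SOCK=/var/tmp/spdk.sock

    # Reroute every crc32c operation to the "error" accel module; this has to
    # happen while the target is still waiting for framework_start_init.
    $RPC -s $NVMF_SOCK accel_assign_opc -o crc32c -m error
    $RPC -s $NVMF_SOCK framework_start_init

    # Per test case: leave injection disabled for the baseline run, then
    # corrupt 256 crc32c operations so the host sees data digest errors.
    $RPC -s $NVMF_SOCK accel_error_inject_error -o crc32c -t disable
    $RPC -s $NVMF_SOCK accel_error_inject_error -o crc32c -t corrupt -i 256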
00:33:01.988 [2024-07-13 15:44:32.615785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:01.988 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:01.988 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:01.988 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:01.988 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:01.988 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:01.989 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:01.989 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:01.989 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.989 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:01.989 [2024-07-13 15:44:32.688481] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:01.989 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.989 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:01.989 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:01.989 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.989 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:02.262 null0 00:33:02.262 [2024-07-13 15:44:32.804480] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:02.262 [2024-07-13 15:44:32.828689] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:02.262 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.262 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:02.262 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:02.262 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:02.262 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:02.262 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:02.262 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1254174 00:33:02.262 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1254174 /var/tmp/bperf.sock 00:33:02.262 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:02.262 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1254174 ']' 00:33:02.262 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:02.262 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:33:02.262 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:02.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:02.262 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:02.262 15:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:02.262 [2024-07-13 15:44:32.879594] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:33:02.262 [2024-07-13 15:44:32.879674] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254174 ] 00:33:02.262 EAL: No free 2048 kB hugepages reported on node 1 00:33:02.262 [2024-07-13 15:44:32.917456] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:02.262 [2024-07-13 15:44:32.947861] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:02.519 [2024-07-13 15:44:33.043734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:02.519 15:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:02.519 15:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:02.519 15:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:02.519 15:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:02.796 15:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:02.796 15:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.796 15:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:02.796 15:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.796 15:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:02.796 15:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:03.054 nvme0n1 00:33:03.054 15:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:03.054 15:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.054 15:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:03.054 15:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.054 15:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@69 -- # bperf_py perform_tests 00:33:03.054 15:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:03.312 Running I/O for 2 seconds... 00:33:03.312 [2024-07-13 15:44:33.875613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.312 [2024-07-13 15:44:33.875664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.312 [2024-07-13 15:44:33.875686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.312 [2024-07-13 15:44:33.894575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.312 [2024-07-13 15:44:33.894612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.312 [2024-07-13 15:44:33.894633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.312 [2024-07-13 15:44:33.909874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.312 [2024-07-13 15:44:33.909911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.312 [2024-07-13 15:44:33.909953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.312 [2024-07-13 15:44:33.926095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.312 [2024-07-13 15:44:33.926143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.312 [2024-07-13 15:44:33.926160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.312 [2024-07-13 15:44:33.938024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.312 [2024-07-13 15:44:33.938056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.312 [2024-07-13 15:44:33.938074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.312 [2024-07-13 15:44:33.952233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.312 [2024-07-13 15:44:33.952268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.312 [2024-07-13 15:44:33.952287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.312 [2024-07-13 15:44:33.966358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.312 [2024-07-13 
15:44:33.966393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.312 [2024-07-13 15:44:33.966412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.312 [2024-07-13 15:44:33.979456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.312 [2024-07-13 15:44:33.979491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.312 [2024-07-13 15:44:33.979510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.312 [2024-07-13 15:44:33.993522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.312 [2024-07-13 15:44:33.993556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.312 [2024-07-13 15:44:33.993575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.312 [2024-07-13 15:44:34.009295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.312 [2024-07-13 15:44:34.009328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.312 [2024-07-13 15:44:34.009349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.312 [2024-07-13 15:44:34.021309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.312 [2024-07-13 15:44:34.021342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.312 [2024-07-13 15:44:34.021361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.312 [2024-07-13 15:44:34.037299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.312 [2024-07-13 15:44:34.037341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.312 [2024-07-13 15:44:34.037361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.312 [2024-07-13 15:44:34.049333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.312 [2024-07-13 15:44:34.049365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.312 [2024-07-13 15:44:34.049382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.312 [2024-07-13 15:44:34.063009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x8c20d0) 00:33:03.312 [2024-07-13 15:44:34.063053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.312 [2024-07-13 15:44:34.063070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.312 [2024-07-13 15:44:34.076832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.312 [2024-07-13 15:44:34.076863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.312 [2024-07-13 15:44:34.076908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.570 [2024-07-13 15:44:34.088583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.570 [2024-07-13 15:44:34.088616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.570 [2024-07-13 15:44:34.088634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.570 [2024-07-13 15:44:34.104007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.570 [2024-07-13 15:44:34.104051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.570 [2024-07-13 15:44:34.104068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.570 [2024-07-13 15:44:34.118891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.570 [2024-07-13 15:44:34.118937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.570 [2024-07-13 15:44:34.118954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.570 [2024-07-13 15:44:34.132137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.570 [2024-07-13 15:44:34.132183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.570 [2024-07-13 15:44:34.132203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.570 [2024-07-13 15:44:34.145125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.570 [2024-07-13 15:44:34.145165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.570 [2024-07-13 15:44:34.145182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.570 [2024-07-13 15:44:34.156846] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.570 [2024-07-13 15:44:34.156913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.570 [2024-07-13 15:44:34.156930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.570 [2024-07-13 15:44:34.170997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.570 [2024-07-13 15:44:34.171027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.570 [2024-07-13 15:44:34.171045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.570 [2024-07-13 15:44:34.185687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.570 [2024-07-13 15:44:34.185721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.570 [2024-07-13 15:44:34.185741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.570 [2024-07-13 15:44:34.199659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.570 [2024-07-13 15:44:34.199692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.570 [2024-07-13 15:44:34.199712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.570 [2024-07-13 15:44:34.213080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.570 [2024-07-13 15:44:34.213110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.570 [2024-07-13 15:44:34.213127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.570 [2024-07-13 15:44:34.228610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.570 [2024-07-13 15:44:34.228644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.570 [2024-07-13 15:44:34.228663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.570 [2024-07-13 15:44:34.240638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.570 [2024-07-13 15:44:34.240672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.570 [2024-07-13 15:44:34.240690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:33:03.570 [2024-07-13 15:44:34.254911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.570 [2024-07-13 15:44:34.254941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.570 [2024-07-13 15:44:34.254958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.570 [2024-07-13 15:44:34.271620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.570 [2024-07-13 15:44:34.271654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.570 [2024-07-13 15:44:34.271679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.570 [2024-07-13 15:44:34.282680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.570 [2024-07-13 15:44:34.282714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.570 [2024-07-13 15:44:34.282732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.570 [2024-07-13 15:44:34.300133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.570 [2024-07-13 15:44:34.300181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.570 [2024-07-13 15:44:34.300200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.570 [2024-07-13 15:44:34.313948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.570 [2024-07-13 15:44:34.313993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.570 [2024-07-13 15:44:34.314010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.570 [2024-07-13 15:44:34.325738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.570 [2024-07-13 15:44:34.325771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.570 [2024-07-13 15:44:34.325790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.838 [2024-07-13 15:44:34.339989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.838 [2024-07-13 15:44:34.340017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.838 [2024-07-13 15:44:34.340033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.838 [2024-07-13 15:44:34.353215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.838 [2024-07-13 15:44:34.353264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.838 [2024-07-13 15:44:34.353283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.838 [2024-07-13 15:44:34.365882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.838 [2024-07-13 15:44:34.365930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.838 [2024-07-13 15:44:34.365946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.838 [2024-07-13 15:44:34.382500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.838 [2024-07-13 15:44:34.382534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.838 [2024-07-13 15:44:34.382553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.838 [2024-07-13 15:44:34.395975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.838 [2024-07-13 15:44:34.396005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.838 [2024-07-13 15:44:34.396023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.838 [2024-07-13 15:44:34.408113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.838 [2024-07-13 15:44:34.408155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.838 [2024-07-13 15:44:34.408171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.838 [2024-07-13 15:44:34.423083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.838 [2024-07-13 15:44:34.423112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.838 [2024-07-13 15:44:34.423128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.838 [2024-07-13 15:44:34.436934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.838 [2024-07-13 15:44:34.436964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.839 [2024-07-13 15:44:34.436981] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.839 [2024-07-13 15:44:34.451577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.839 [2024-07-13 15:44:34.451610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.839 [2024-07-13 15:44:34.451629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.839 [2024-07-13 15:44:34.462942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.839 [2024-07-13 15:44:34.462968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.839 [2024-07-13 15:44:34.462983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.839 [2024-07-13 15:44:34.477087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.839 [2024-07-13 15:44:34.477115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.839 [2024-07-13 15:44:34.477147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.839 [2024-07-13 15:44:34.493835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.839 [2024-07-13 15:44:34.493879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.839 [2024-07-13 15:44:34.493901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.839 [2024-07-13 15:44:34.505460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.839 [2024-07-13 15:44:34.505495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.839 [2024-07-13 15:44:34.505520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.839 [2024-07-13 15:44:34.520132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.839 [2024-07-13 15:44:34.520177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.839 [2024-07-13 15:44:34.520197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.839 [2024-07-13 15:44:34.533460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.839 [2024-07-13 15:44:34.533493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.839 [2024-07-13 15:44:34.533512] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.839 [2024-07-13 15:44:34.547358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.839 [2024-07-13 15:44:34.547391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.839 [2024-07-13 15:44:34.547410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.839 [2024-07-13 15:44:34.561418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.839 [2024-07-13 15:44:34.561452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.839 [2024-07-13 15:44:34.561470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.839 [2024-07-13 15:44:34.573430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.839 [2024-07-13 15:44:34.573463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.839 [2024-07-13 15:44:34.573482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.839 [2024-07-13 15:44:34.587959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:03.839 [2024-07-13 15:44:34.587989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.839 [2024-07-13 15:44:34.588007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.098 [2024-07-13 15:44:34.601539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.098 [2024-07-13 15:44:34.601573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.098 [2024-07-13 15:44:34.601592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.098 [2024-07-13 15:44:34.614241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.098 [2024-07-13 15:44:34.614274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.098 [2024-07-13 15:44:34.614293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.098 [2024-07-13 15:44:34.628894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.098 [2024-07-13 15:44:34.628944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:04.098 [2024-07-13 15:44:34.628962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.098 [2024-07-13 15:44:34.641833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.098 [2024-07-13 15:44:34.641874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.098 [2024-07-13 15:44:34.641911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.098 [2024-07-13 15:44:34.656103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.098 [2024-07-13 15:44:34.656134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.098 [2024-07-13 15:44:34.656152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.098 [2024-07-13 15:44:34.668168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.098 [2024-07-13 15:44:34.668202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.098 [2024-07-13 15:44:34.668221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.098 [2024-07-13 15:44:34.682527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.098 [2024-07-13 15:44:34.682560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.098 [2024-07-13 15:44:34.682579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.098 [2024-07-13 15:44:34.696675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.098 [2024-07-13 15:44:34.696708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.098 [2024-07-13 15:44:34.696726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.098 [2024-07-13 15:44:34.708951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.098 [2024-07-13 15:44:34.708977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.098 [2024-07-13 15:44:34.708993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.098 [2024-07-13 15:44:34.722741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.098 [2024-07-13 15:44:34.722775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8708 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.098 [2024-07-13 15:44:34.722795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.098 [2024-07-13 15:44:34.739496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.098 [2024-07-13 15:44:34.739532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.098 [2024-07-13 15:44:34.739551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.098 [2024-07-13 15:44:34.751521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.098 [2024-07-13 15:44:34.751555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.098 [2024-07-13 15:44:34.751574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.098 [2024-07-13 15:44:34.767497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.098 [2024-07-13 15:44:34.767531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.098 [2024-07-13 15:44:34.767550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.098 [2024-07-13 15:44:34.778840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.098 [2024-07-13 15:44:34.778882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.098 [2024-07-13 15:44:34.778916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.098 [2024-07-13 15:44:34.794466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.098 [2024-07-13 15:44:34.794499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.098 [2024-07-13 15:44:34.794518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.098 [2024-07-13 15:44:34.805557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.098 [2024-07-13 15:44:34.805590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.099 [2024-07-13 15:44:34.805609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.099 [2024-07-13 15:44:34.820348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.099 [2024-07-13 15:44:34.820382] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.099 [2024-07-13 15:44:34.820401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.099 [2024-07-13 15:44:34.835834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.099 [2024-07-13 15:44:34.835875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.099 [2024-07-13 15:44:34.835896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.099 [2024-07-13 15:44:34.848504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.099 [2024-07-13 15:44:34.848538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.099 [2024-07-13 15:44:34.848557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.099 [2024-07-13 15:44:34.861384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.099 [2024-07-13 15:44:34.861417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.099 [2024-07-13 15:44:34.861442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.355 [2024-07-13 15:44:34.874603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.355 [2024-07-13 15:44:34.874636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.355 [2024-07-13 15:44:34.874655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.355 [2024-07-13 15:44:34.890248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.355 [2024-07-13 15:44:34.890281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.355 [2024-07-13 15:44:34.890299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.355 [2024-07-13 15:44:34.904521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.355 [2024-07-13 15:44:34.904554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.355 [2024-07-13 15:44:34.904573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.355 [2024-07-13 15:44:34.916394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.355 [2024-07-13 15:44:34.916427] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.355 [2024-07-13 15:44:34.916446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.355 [2024-07-13 15:44:34.932392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.355 [2024-07-13 15:44:34.932425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.355 [2024-07-13 15:44:34.932444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.355 [2024-07-13 15:44:34.947022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.355 [2024-07-13 15:44:34.947052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.355 [2024-07-13 15:44:34.947070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.355 [2024-07-13 15:44:34.959221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.355 [2024-07-13 15:44:34.959254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.355 [2024-07-13 15:44:34.959273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.355 [2024-07-13 15:44:34.974027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.355 [2024-07-13 15:44:34.974055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.355 [2024-07-13 15:44:34.974070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.355 [2024-07-13 15:44:34.988524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.355 [2024-07-13 15:44:34.988564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.355 [2024-07-13 15:44:34.988583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.355 [2024-07-13 15:44:35.000845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.355 [2024-07-13 15:44:35.000888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.356 [2024-07-13 15:44:35.000931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.356 [2024-07-13 15:44:35.015981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x8c20d0) 00:33:04.356 [2024-07-13 15:44:35.016012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.356 [2024-07-13 15:44:35.016029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.356 [2024-07-13 15:44:35.028070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.356 [2024-07-13 15:44:35.028100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.356 [2024-07-13 15:44:35.028117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.356 [2024-07-13 15:44:35.042589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.356 [2024-07-13 15:44:35.042623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.356 [2024-07-13 15:44:35.042642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.356 [2024-07-13 15:44:35.055574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.356 [2024-07-13 15:44:35.055621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.356 [2024-07-13 15:44:35.055640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.356 [2024-07-13 15:44:35.068819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.356 [2024-07-13 15:44:35.068849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.356 [2024-07-13 15:44:35.068874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.356 [2024-07-13 15:44:35.082461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.356 [2024-07-13 15:44:35.082495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.356 [2024-07-13 15:44:35.082513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.356 [2024-07-13 15:44:35.096123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.356 [2024-07-13 15:44:35.096166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.356 [2024-07-13 15:44:35.096181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.356 [2024-07-13 15:44:35.110859] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.356 [2024-07-13 15:44:35.110915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.356 [2024-07-13 15:44:35.110933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.613 [2024-07-13 15:44:35.123781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.613 [2024-07-13 15:44:35.123815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.613 [2024-07-13 15:44:35.123833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.613 [2024-07-13 15:44:35.138528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.613 [2024-07-13 15:44:35.138562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.613 [2024-07-13 15:44:35.138581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.613 [2024-07-13 15:44:35.150568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.613 [2024-07-13 15:44:35.150603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.613 [2024-07-13 15:44:35.150621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.613 [2024-07-13 15:44:35.163979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.613 [2024-07-13 15:44:35.164009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.613 [2024-07-13 15:44:35.164025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.613 [2024-07-13 15:44:35.178099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.613 [2024-07-13 15:44:35.178129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.613 [2024-07-13 15:44:35.178146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.613 [2024-07-13 15:44:35.190509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.613 [2024-07-13 15:44:35.190543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.613 [2024-07-13 15:44:35.190561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:04.613 [2024-07-13 15:44:35.204558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.613 [2024-07-13 15:44:35.204591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.613 [2024-07-13 15:44:35.204610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.613 [2024-07-13 15:44:35.217849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.613 [2024-07-13 15:44:35.217889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.613 [2024-07-13 15:44:35.217928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.613 [2024-07-13 15:44:35.229741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.613 [2024-07-13 15:44:35.229774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.613 [2024-07-13 15:44:35.229793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.613 [2024-07-13 15:44:35.244023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.613 [2024-07-13 15:44:35.244053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.614 [2024-07-13 15:44:35.244070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.614 [2024-07-13 15:44:35.258166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.614 [2024-07-13 15:44:35.258196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.614 [2024-07-13 15:44:35.258228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.614 [2024-07-13 15:44:35.272653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.614 [2024-07-13 15:44:35.272687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.614 [2024-07-13 15:44:35.272706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.614 [2024-07-13 15:44:35.283800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.614 [2024-07-13 15:44:35.283833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.614 [2024-07-13 15:44:35.283852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.614 [2024-07-13 15:44:35.298495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.614 [2024-07-13 15:44:35.298529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.614 [2024-07-13 15:44:35.298549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.614 [2024-07-13 15:44:35.313162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.614 [2024-07-13 15:44:35.313207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.614 [2024-07-13 15:44:35.313226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.614 [2024-07-13 15:44:35.327128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.614 [2024-07-13 15:44:35.327159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.614 [2024-07-13 15:44:35.327176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.614 [2024-07-13 15:44:35.338627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.614 [2024-07-13 15:44:35.338660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.614 [2024-07-13 15:44:35.338679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.614 [2024-07-13 15:44:35.351959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.614 [2024-07-13 15:44:35.351988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.614 [2024-07-13 15:44:35.352004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.614 [2024-07-13 15:44:35.367548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.614 [2024-07-13 15:44:35.367581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.614 [2024-07-13 15:44:35.367601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.871 [2024-07-13 15:44:35.381842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.871 [2024-07-13 15:44:35.381886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.871 [2024-07-13 15:44:35.381922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.871 [2024-07-13 15:44:35.393660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.871 [2024-07-13 15:44:35.393694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.871 [2024-07-13 15:44:35.393713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.871 [2024-07-13 15:44:35.407620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.871 [2024-07-13 15:44:35.407653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.871 [2024-07-13 15:44:35.407671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.871 [2024-07-13 15:44:35.420995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.871 [2024-07-13 15:44:35.421025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.871 [2024-07-13 15:44:35.421042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.871 [2024-07-13 15:44:35.435538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.871 [2024-07-13 15:44:35.435572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.871 [2024-07-13 15:44:35.435590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.871 [2024-07-13 15:44:35.449156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.871 [2024-07-13 15:44:35.449204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.871 [2024-07-13 15:44:35.449229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.871 [2024-07-13 15:44:35.462123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.871 [2024-07-13 15:44:35.462170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.871 [2024-07-13 15:44:35.462188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.871 [2024-07-13 15:44:35.474975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.871 [2024-07-13 15:44:35.475004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.872 [2024-07-13 15:44:35.475021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.872 [2024-07-13 15:44:35.488545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.872 [2024-07-13 15:44:35.488578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.872 [2024-07-13 15:44:35.488597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.872 [2024-07-13 15:44:35.502330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.872 [2024-07-13 15:44:35.502362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.872 [2024-07-13 15:44:35.502381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.872 [2024-07-13 15:44:35.516150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.872 [2024-07-13 15:44:35.516197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.872 [2024-07-13 15:44:35.516216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.872 [2024-07-13 15:44:35.528583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.872 [2024-07-13 15:44:35.528616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.872 [2024-07-13 15:44:35.528634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.872 [2024-07-13 15:44:35.542403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.872 [2024-07-13 15:44:35.542437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.872 [2024-07-13 15:44:35.542457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.872 [2024-07-13 15:44:35.555721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.872 [2024-07-13 15:44:35.555755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.872 [2024-07-13 15:44:35.555774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.872 [2024-07-13 15:44:35.569752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.872 [2024-07-13 15:44:35.569791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.872 
[2024-07-13 15:44:35.569811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.872 [2024-07-13 15:44:35.584842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.872 [2024-07-13 15:44:35.584885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.872 [2024-07-13 15:44:35.584906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.872 [2024-07-13 15:44:35.596276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.872 [2024-07-13 15:44:35.596309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.872 [2024-07-13 15:44:35.596328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.872 [2024-07-13 15:44:35.611434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.872 [2024-07-13 15:44:35.611468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.872 [2024-07-13 15:44:35.611486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:04.872 [2024-07-13 15:44:35.626631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:04.872 [2024-07-13 15:44:35.626664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.872 [2024-07-13 15:44:35.626682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.130 [2024-07-13 15:44:35.638664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:05.130 [2024-07-13 15:44:35.638697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.130 [2024-07-13 15:44:35.638716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.130 [2024-07-13 15:44:35.653022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:05.130 [2024-07-13 15:44:35.653049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.130 [2024-07-13 15:44:35.653065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.130 [2024-07-13 15:44:35.667164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:05.130 [2024-07-13 15:44:35.667193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13777 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.130 [2024-07-13 15:44:35.667228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.130 [2024-07-13 15:44:35.681052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:05.130 [2024-07-13 15:44:35.681083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.130 [2024-07-13 15:44:35.681100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.130 [2024-07-13 15:44:35.694196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:05.130 [2024-07-13 15:44:35.694241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.130 [2024-07-13 15:44:35.694261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.130 [2024-07-13 15:44:35.707016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:05.130 [2024-07-13 15:44:35.707044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.130 [2024-07-13 15:44:35.707060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.130 [2024-07-13 15:44:35.720375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:05.130 [2024-07-13 15:44:35.720409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.130 [2024-07-13 15:44:35.720428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.130 [2024-07-13 15:44:35.733729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:05.130 [2024-07-13 15:44:35.733762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.130 [2024-07-13 15:44:35.733781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.130 [2024-07-13 15:44:35.747705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:05.130 [2024-07-13 15:44:35.747737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.130 [2024-07-13 15:44:35.747756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.130 [2024-07-13 15:44:35.762279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:05.130 [2024-07-13 15:44:35.762313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:81 nsid:1 lba:15565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.131 [2024-07-13 15:44:35.762331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.131 [2024-07-13 15:44:35.774347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:05.131 [2024-07-13 15:44:35.774380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.131 [2024-07-13 15:44:35.774398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.131 [2024-07-13 15:44:35.789423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:05.131 [2024-07-13 15:44:35.789456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.131 [2024-07-13 15:44:35.789475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.131 [2024-07-13 15:44:35.802039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:05.131 [2024-07-13 15:44:35.802068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.131 [2024-07-13 15:44:35.802091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.131 [2024-07-13 15:44:35.816361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:05.131 [2024-07-13 15:44:35.816394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.131 [2024-07-13 15:44:35.816413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.131 [2024-07-13 15:44:35.829989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:05.131 [2024-07-13 15:44:35.830019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.131 [2024-07-13 15:44:35.830036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.131 [2024-07-13 15:44:35.844850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:05.131 [2024-07-13 15:44:35.844891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.131 [2024-07-13 15:44:35.844926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.131 [2024-07-13 15:44:35.856656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8c20d0) 00:33:05.131 [2024-07-13 15:44:35.856688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.131 [2024-07-13 15:44:35.856707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:05.131
00:33:05.131 Latency(us)
00:33:05.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:05.131 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:05.131 nvme0n1 : 2.00 18542.91 72.43 0.00 0.00 6893.04 3373.89 19223.89
00:33:05.131 ===================================================================================================================
00:33:05.131 Total : 18542.91 72.43 0.00 0.00 6893.04 3373.89 19223.89
00:33:05.131 0
00:33:05.131 15:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:05.131 15:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:05.131 15:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:05.131 15:44:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:05.131 | .driver_specific
00:33:05.131 | .nvme_error
00:33:05.131 | .status_code
00:33:05.131 | .command_transient_transport_error'
00:33:05.389 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 ))
00:33:05.389 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1254174
00:33:05.389 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1254174 ']'
00:33:05.389 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1254174
00:33:05.389 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:33:05.389 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:05.389 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1254174
00:33:05.389 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:05.389 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:05.389 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1254174'
00:33:05.389 killing process with pid 1254174
00:33:05.389 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1254174
00:33:05.389 Received shutdown signal, test time was about 2.000000 seconds
00:33:05.389
00:33:05.389 Latency(us)
00:33:05.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:05.389 ===================================================================================================================
00:33:05.389 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:05.389 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1254174
00:33:05.647 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:33:05.647 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
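For readability, the multi-line jq filter traced above is equivalent to the single pipeline below; the RPC path and jq fields are copied from the trace, and the comment is interpretation rather than captured output:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    # get_transient_errcount returned 145 for this run; the test only requires that this count be greater than zero,
    # which is the "(( 145 > 0 ))" check traced above.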
00:33:05.647 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:33:05.647 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:33:05.647 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:33:05.647 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1254628
00:33:05.647 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:33:05.647 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1254628 /var/tmp/bperf.sock
00:33:05.647 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1254628 ']'
00:33:05.647 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:05.647 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:05.647 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:05.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:05.647 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:05.647 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:05.648 [2024-07-13 15:44:36.409009] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization...
00:33:05.648 [2024-07-13 15:44:36.409095] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254628 ]
00:33:05.648 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:05.648 Zero copy mechanism will not be used.
00:33:05.905 EAL: No free 2048 kB hugepages reported on node 1
00:33:05.905 [2024-07-13 15:44:36.441019] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation.
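The bdevperf command line above maps one-to-one onto the run_bperf_err arguments; a brief annotated restatement follows, with flag meanings inferred from the surrounding trace and general bdevperf usage rather than stated by the log itself:

    # -m 2                    core mask 0x2 (hence "Reactor started on core 1" below)
    # -r /var/tmp/bperf.sock  RPC socket later used by rpc.py and bdevperf.py
    # -w randread -o 131072 -q 16   workload, I/O size and queue depth from "run_bperf_err randread 131072 16"
    # -t 2                    run time in seconds ("test time was about 2.000000 seconds" at shutdown)
    # -z                      start idle and wait for a perform_tests RPC before issuing I/O
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z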
00:33:05.905 [2024-07-13 15:44:36.469253] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:05.905 [2024-07-13 15:44:36.557341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:05.905 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:05.905 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:33:05.906 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:05.906 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:06.163 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:06.163 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:06.163 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:06.163 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:06.163 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:06.163 15:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:06.728 nvme0n1
00:33:06.728 15:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:33:06.728 15:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:06.728 15:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:06.728 15:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:06.728 15:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:06.728 15:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:06.728 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:06.728 Zero copy mechanism will not be used.
00:33:06.728 Running I/O for 2 seconds...
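Condensed, the setup traced above for this second pass (128 KiB reads at queue depth 16) is the sequence below. The commands are copied from the trace; the comments are interpretation only, and rpc_cmd is the autotest_common.sh helper whose target socket is not shown in this excerpt:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1        # keep per-status error counters, retry failed I/O indefinitely
    rpc_cmd accel_error_inject_error -o crc32c -t disable                    # start from a clean state, no crc32c error injection
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0                               # attach with TCP data digest enabled; creates bdev nvme0n1
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32              # corrupt crc32c results so received data digests fail
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests                                 # start the queued randread workload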
00:33:06.728 [2024-07-13 15:44:37.422247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.728 [2024-07-13 15:44:37.422307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.728 [2024-07-13 15:44:37.422329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.728 [2024-07-13 15:44:37.434691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.728 [2024-07-13 15:44:37.434728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.728 [2024-07-13 15:44:37.434749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.728 [2024-07-13 15:44:37.446483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.728 [2024-07-13 15:44:37.446519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.728 [2024-07-13 15:44:37.446540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.728 [2024-07-13 15:44:37.458262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.728 [2024-07-13 15:44:37.458299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.728 [2024-07-13 15:44:37.458320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.728 [2024-07-13 15:44:37.470246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.728 [2024-07-13 15:44:37.470293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.728 [2024-07-13 15:44:37.470322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.728 [2024-07-13 15:44:37.482466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.728 [2024-07-13 15:44:37.482503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.728 [2024-07-13 15:44:37.482523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.986 [2024-07-13 15:44:37.494788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.986 [2024-07-13 15:44:37.494825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.986 [2024-07-13 15:44:37.494845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.986 [2024-07-13 15:44:37.506620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.986 [2024-07-13 15:44:37.506658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.986 [2024-07-13 15:44:37.506677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.986 [2024-07-13 15:44:37.519261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.986 [2024-07-13 15:44:37.519298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.986 [2024-07-13 15:44:37.519319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.986 [2024-07-13 15:44:37.531032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.986 [2024-07-13 15:44:37.531064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.986 [2024-07-13 15:44:37.531081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.986 [2024-07-13 15:44:37.543476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.986 [2024-07-13 15:44:37.543513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.986 [2024-07-13 15:44:37.543533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.986 [2024-07-13 15:44:37.555374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.986 [2024-07-13 15:44:37.555410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.986 [2024-07-13 15:44:37.555429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.986 [2024-07-13 15:44:37.567002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.986 [2024-07-13 15:44:37.567033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.986 [2024-07-13 15:44:37.567050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.986 [2024-07-13 15:44:37.578698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.986 [2024-07-13 15:44:37.578741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.986 [2024-07-13 15:44:37.578761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.986 [2024-07-13 15:44:37.590836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.986 [2024-07-13 15:44:37.590885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.986 [2024-07-13 15:44:37.590920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.986 [2024-07-13 15:44:37.603350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.986 [2024-07-13 15:44:37.603387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.986 [2024-07-13 15:44:37.603406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.986 [2024-07-13 15:44:37.616835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.986 [2024-07-13 15:44:37.616878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.986 [2024-07-13 15:44:37.616917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.986 [2024-07-13 15:44:37.630664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.986 [2024-07-13 15:44:37.630700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.986 [2024-07-13 15:44:37.630720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.986 [2024-07-13 15:44:37.644367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.986 [2024-07-13 15:44:37.644404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.986 [2024-07-13 15:44:37.644423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.987 [2024-07-13 15:44:37.657372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.987 [2024-07-13 15:44:37.657409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.987 [2024-07-13 15:44:37.657429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.987 [2024-07-13 15:44:37.671412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.987 [2024-07-13 15:44:37.671448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:06.987 [2024-07-13 15:44:37.671467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.987 [2024-07-13 15:44:37.684887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.987 [2024-07-13 15:44:37.684935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.987 [2024-07-13 15:44:37.684952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.987 [2024-07-13 15:44:37.698016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.987 [2024-07-13 15:44:37.698047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.987 [2024-07-13 15:44:37.698064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.987 [2024-07-13 15:44:37.710877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.987 [2024-07-13 15:44:37.710926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.987 [2024-07-13 15:44:37.710943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.987 [2024-07-13 15:44:37.723657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.987 [2024-07-13 15:44:37.723693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.987 [2024-07-13 15:44:37.723713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.987 [2024-07-13 15:44:37.737719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:06.987 [2024-07-13 15:44:37.737755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.987 [2024-07-13 15:44:37.737774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.987 [2024-07-13 15:44:37.751761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.245 [2024-07-13 15:44:37.751796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.245 [2024-07-13 15:44:37.751817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.245 [2024-07-13 15:44:37.765808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.245 [2024-07-13 15:44:37.765844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.245 [2024-07-13 15:44:37.765863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.245 [2024-07-13 15:44:37.777170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.245 [2024-07-13 15:44:37.777201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.245 [2024-07-13 15:44:37.777233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.245 [2024-07-13 15:44:37.790743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.245 [2024-07-13 15:44:37.790779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.245 [2024-07-13 15:44:37.790800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.245 [2024-07-13 15:44:37.802311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.245 [2024-07-13 15:44:37.802347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.245 [2024-07-13 15:44:37.802373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.245 [2024-07-13 15:44:37.816303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.245 [2024-07-13 15:44:37.816339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.245 [2024-07-13 15:44:37.816360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.245 [2024-07-13 15:44:37.829438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.245 [2024-07-13 15:44:37.829475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.245 [2024-07-13 15:44:37.829494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.245 [2024-07-13 15:44:37.842575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.245 [2024-07-13 15:44:37.842610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.245 [2024-07-13 15:44:37.842630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.245 [2024-07-13 15:44:37.856020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.245 [2024-07-13 15:44:37.856065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.245 [2024-07-13 15:44:37.856081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.245 [2024-07-13 15:44:37.869613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.245 [2024-07-13 15:44:37.869649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.245 [2024-07-13 15:44:37.869669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.245 [2024-07-13 15:44:37.883320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.245 [2024-07-13 15:44:37.883358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.245 [2024-07-13 15:44:37.883378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.245 [2024-07-13 15:44:37.894729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.245 [2024-07-13 15:44:37.894765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.245 [2024-07-13 15:44:37.894784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.245 [2024-07-13 15:44:37.908034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.245 [2024-07-13 15:44:37.908080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.245 [2024-07-13 15:44:37.908097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.245 [2024-07-13 15:44:37.921304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.245 [2024-07-13 15:44:37.921340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.245 [2024-07-13 15:44:37.921360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.245 [2024-07-13 15:44:37.935303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.245 [2024-07-13 15:44:37.935339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.245 [2024-07-13 15:44:37.935358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.245 [2024-07-13 15:44:37.949072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 
00:33:07.245 [2024-07-13 15:44:37.949105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.245 [2024-07-13 15:44:37.949123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.245 [2024-07-13 15:44:37.962900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.245 [2024-07-13 15:44:37.962947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.245 [2024-07-13 15:44:37.962965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.245 [2024-07-13 15:44:37.977240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.245 [2024-07-13 15:44:37.977276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.245 [2024-07-13 15:44:37.977296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.245 [2024-07-13 15:44:37.991265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.245 [2024-07-13 15:44:37.991301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.245 [2024-07-13 15:44:37.991320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.245 [2024-07-13 15:44:38.005044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.245 [2024-07-13 15:44:38.005089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.245 [2024-07-13 15:44:38.005106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.503 [2024-07-13 15:44:38.019394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.503 [2024-07-13 15:44:38.019431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.503 [2024-07-13 15:44:38.019450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.503 [2024-07-13 15:44:38.034359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.503 [2024-07-13 15:44:38.034395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.503 [2024-07-13 15:44:38.034421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.503 [2024-07-13 15:44:38.047064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.503 [2024-07-13 15:44:38.047097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.503 [2024-07-13 15:44:38.047114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.503 [2024-07-13 15:44:38.060778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.503 [2024-07-13 15:44:38.060814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.503 [2024-07-13 15:44:38.060833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.503 [2024-07-13 15:44:38.074375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.503 [2024-07-13 15:44:38.074412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.503 [2024-07-13 15:44:38.074432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.503 [2024-07-13 15:44:38.087120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.503 [2024-07-13 15:44:38.087175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.503 [2024-07-13 15:44:38.087194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.503 [2024-07-13 15:44:38.098905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.503 [2024-07-13 15:44:38.098953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.504 [2024-07-13 15:44:38.098971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.504 [2024-07-13 15:44:38.109515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.504 [2024-07-13 15:44:38.109551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.504 [2024-07-13 15:44:38.109571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.504 [2024-07-13 15:44:38.121417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.504 [2024-07-13 15:44:38.121454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.504 [2024-07-13 15:44:38.121473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.504 [2024-07-13 15:44:38.133354] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.504 [2024-07-13 15:44:38.133390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.504 [2024-07-13 15:44:38.133410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.504 [2024-07-13 15:44:38.144852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.504 [2024-07-13 15:44:38.144916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.504 [2024-07-13 15:44:38.144934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.504 [2024-07-13 15:44:38.156846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.504 [2024-07-13 15:44:38.156887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.504 [2024-07-13 15:44:38.156920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.504 [2024-07-13 15:44:38.168697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.504 [2024-07-13 15:44:38.168732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.504 [2024-07-13 15:44:38.168751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.504 [2024-07-13 15:44:38.180240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.504 [2024-07-13 15:44:38.180275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.504 [2024-07-13 15:44:38.180294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.504 [2024-07-13 15:44:38.192033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.504 [2024-07-13 15:44:38.192078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.504 [2024-07-13 15:44:38.192095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.504 [2024-07-13 15:44:38.203892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.504 [2024-07-13 15:44:38.203939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.504 [2024-07-13 15:44:38.203957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:33:07.504 [2024-07-13 15:44:38.215943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.504 [2024-07-13 15:44:38.215989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.504 [2024-07-13 15:44:38.216006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.504 [2024-07-13 15:44:38.228243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.504 [2024-07-13 15:44:38.228279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.504 [2024-07-13 15:44:38.228298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.504 [2024-07-13 15:44:38.239983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.504 [2024-07-13 15:44:38.240030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.504 [2024-07-13 15:44:38.240047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.504 [2024-07-13 15:44:38.251688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.504 [2024-07-13 15:44:38.251723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.504 [2024-07-13 15:44:38.251742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.504 [2024-07-13 15:44:38.263447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.504 [2024-07-13 15:44:38.263482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.504 [2024-07-13 15:44:38.263502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.762 [2024-07-13 15:44:38.275187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.762 [2024-07-13 15:44:38.275222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.762 [2024-07-13 15:44:38.275242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.762 [2024-07-13 15:44:38.286967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.762 [2024-07-13 15:44:38.287017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.762 [2024-07-13 15:44:38.287033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.762 [2024-07-13 15:44:38.296952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.762 [2024-07-13 15:44:38.296997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.762 [2024-07-13 15:44:38.297013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.762 [2024-07-13 15:44:38.307979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.762 [2024-07-13 15:44:38.308026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.762 [2024-07-13 15:44:38.308042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.762 [2024-07-13 15:44:38.319020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.762 [2024-07-13 15:44:38.319064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.762 [2024-07-13 15:44:38.319080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.762 [2024-07-13 15:44:38.330059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.762 [2024-07-13 15:44:38.330106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.762 [2024-07-13 15:44:38.330123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.762 [2024-07-13 15:44:38.341069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.762 [2024-07-13 15:44:38.341100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.762 [2024-07-13 15:44:38.341125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.762 [2024-07-13 15:44:38.352102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.762 [2024-07-13 15:44:38.352134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.762 [2024-07-13 15:44:38.352166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.762 [2024-07-13 15:44:38.363264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.762 [2024-07-13 15:44:38.363300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.762 [2024-07-13 15:44:38.363320] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.762 [2024-07-13 15:44:38.374386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.762 [2024-07-13 15:44:38.374421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.762 [2024-07-13 15:44:38.374441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.762 [2024-07-13 15:44:38.385478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.762 [2024-07-13 15:44:38.385513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.762 [2024-07-13 15:44:38.385532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.762 [2024-07-13 15:44:38.396548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.762 [2024-07-13 15:44:38.396583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.762 [2024-07-13 15:44:38.396603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.762 [2024-07-13 15:44:38.407661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.762 [2024-07-13 15:44:38.407698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.762 [2024-07-13 15:44:38.407718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.762 [2024-07-13 15:44:38.418666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.762 [2024-07-13 15:44:38.418702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.762 [2024-07-13 15:44:38.418721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.762 [2024-07-13 15:44:38.429581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.763 [2024-07-13 15:44:38.429617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.763 [2024-07-13 15:44:38.429636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.763 [2024-07-13 15:44:38.440735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.763 [2024-07-13 15:44:38.440776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.763 [2024-07-13 15:44:38.440796] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.763 [2024-07-13 15:44:38.452041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.763 [2024-07-13 15:44:38.452073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.763 [2024-07-13 15:44:38.452090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.763 [2024-07-13 15:44:38.463258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.763 [2024-07-13 15:44:38.463294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.763 [2024-07-13 15:44:38.463313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.763 [2024-07-13 15:44:38.474281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.763 [2024-07-13 15:44:38.474317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.763 [2024-07-13 15:44:38.474336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:07.763 [2024-07-13 15:44:38.485287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.763 [2024-07-13 15:44:38.485322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.763 [2024-07-13 15:44:38.485341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:07.763 [2024-07-13 15:44:38.496471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.763 [2024-07-13 15:44:38.496507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.763 [2024-07-13 15:44:38.496526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:07.763 [2024-07-13 15:44:38.507493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.763 [2024-07-13 15:44:38.507529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:07.763 [2024-07-13 15:44:38.507548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:07.763 [2024-07-13 15:44:38.518482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:07.763 [2024-07-13 15:44:38.518517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:07.763 [2024-07-13 15:44:38.518536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.021 [2024-07-13 15:44:38.529466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.021 [2024-07-13 15:44:38.529500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.021 [2024-07-13 15:44:38.529519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.021 [2024-07-13 15:44:38.540485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.021 [2024-07-13 15:44:38.540520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.021 [2024-07-13 15:44:38.540539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.021 [2024-07-13 15:44:38.551523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.021 [2024-07-13 15:44:38.551558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.021 [2024-07-13 15:44:38.551577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.021 [2024-07-13 15:44:38.562523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.021 [2024-07-13 15:44:38.562559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.021 [2024-07-13 15:44:38.562578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.021 [2024-07-13 15:44:38.573523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.021 [2024-07-13 15:44:38.573558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.021 [2024-07-13 15:44:38.573577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.021 [2024-07-13 15:44:38.584575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.021 [2024-07-13 15:44:38.584609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.021 [2024-07-13 15:44:38.584628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.021 [2024-07-13 15:44:38.595495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.021 [2024-07-13 15:44:38.595530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.021 [2024-07-13 15:44:38.595549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.021 [2024-07-13 15:44:38.606487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.021 [2024-07-13 15:44:38.606522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.021 [2024-07-13 15:44:38.606541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.021 [2024-07-13 15:44:38.617428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.021 [2024-07-13 15:44:38.617462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.021 [2024-07-13 15:44:38.617481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.021 [2024-07-13 15:44:38.628810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.021 [2024-07-13 15:44:38.628846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.021 [2024-07-13 15:44:38.628879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.021 [2024-07-13 15:44:38.640403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.021 [2024-07-13 15:44:38.640438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.021 [2024-07-13 15:44:38.640457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.021 [2024-07-13 15:44:38.651428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.021 [2024-07-13 15:44:38.651463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.021 [2024-07-13 15:44:38.651481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.021 [2024-07-13 15:44:38.662437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.021 [2024-07-13 15:44:38.662472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.021 [2024-07-13 15:44:38.662491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.021 [2024-07-13 15:44:38.673534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.021 [2024-07-13 15:44:38.673568] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.021 [2024-07-13 15:44:38.673587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.021 [2024-07-13 15:44:38.684505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.021 [2024-07-13 15:44:38.684540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.021 [2024-07-13 15:44:38.684559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.021 [2024-07-13 15:44:38.695472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.021 [2024-07-13 15:44:38.695508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.021 [2024-07-13 15:44:38.695527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.022 [2024-07-13 15:44:38.706528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.022 [2024-07-13 15:44:38.706564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.022 [2024-07-13 15:44:38.706583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.022 [2024-07-13 15:44:38.717758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.022 [2024-07-13 15:44:38.717793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.022 [2024-07-13 15:44:38.717812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.022 [2024-07-13 15:44:38.728776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.022 [2024-07-13 15:44:38.728811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.022 [2024-07-13 15:44:38.728830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.022 [2024-07-13 15:44:38.740093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.022 [2024-07-13 15:44:38.740126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.022 [2024-07-13 15:44:38.740157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.022 [2024-07-13 15:44:38.751635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.022 [2024-07-13 15:44:38.751671] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.022 [2024-07-13 15:44:38.751690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.022 [2024-07-13 15:44:38.762742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.022 [2024-07-13 15:44:38.762777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.022 [2024-07-13 15:44:38.762796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.022 [2024-07-13 15:44:38.773980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.022 [2024-07-13 15:44:38.774026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.022 [2024-07-13 15:44:38.774043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.022 [2024-07-13 15:44:38.785216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.022 [2024-07-13 15:44:38.785250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.022 [2024-07-13 15:44:38.785270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.280 [2024-07-13 15:44:38.796391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.280 [2024-07-13 15:44:38.796426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.280 [2024-07-13 15:44:38.796445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.280 [2024-07-13 15:44:38.807445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.280 [2024-07-13 15:44:38.807479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.280 [2024-07-13 15:44:38.807498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.280 [2024-07-13 15:44:38.818511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.280 [2024-07-13 15:44:38.818547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.280 [2024-07-13 15:44:38.818573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.280 [2024-07-13 15:44:38.829511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 
00:33:08.280 [2024-07-13 15:44:38.829546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.280 [2024-07-13 15:44:38.829565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.280 [2024-07-13 15:44:38.840497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.280 [2024-07-13 15:44:38.840531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.280 [2024-07-13 15:44:38.840551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.280 [2024-07-13 15:44:38.851509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.280 [2024-07-13 15:44:38.851543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.280 [2024-07-13 15:44:38.851562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.280 [2024-07-13 15:44:38.862545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.280 [2024-07-13 15:44:38.862579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.280 [2024-07-13 15:44:38.862597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.280 [2024-07-13 15:44:38.873508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.280 [2024-07-13 15:44:38.873542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.280 [2024-07-13 15:44:38.873561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.280 [2024-07-13 15:44:38.884501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.280 [2024-07-13 15:44:38.884535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.280 [2024-07-13 15:44:38.884553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.280 [2024-07-13 15:44:38.895573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.280 [2024-07-13 15:44:38.895607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.280 [2024-07-13 15:44:38.895626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.280 [2024-07-13 15:44:38.906542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.280 [2024-07-13 15:44:38.906576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.280 [2024-07-13 15:44:38.906595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.280 [2024-07-13 15:44:38.918109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.280 [2024-07-13 15:44:38.918163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.280 [2024-07-13 15:44:38.918181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.280 [2024-07-13 15:44:38.929122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.280 [2024-07-13 15:44:38.929153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.280 [2024-07-13 15:44:38.929188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.280 [2024-07-13 15:44:38.940111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.280 [2024-07-13 15:44:38.940142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.280 [2024-07-13 15:44:38.940159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.280 [2024-07-13 15:44:38.951163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.280 [2024-07-13 15:44:38.951197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.280 [2024-07-13 15:44:38.951215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.280 [2024-07-13 15:44:38.962119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.280 [2024-07-13 15:44:38.962149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.280 [2024-07-13 15:44:38.962182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.280 [2024-07-13 15:44:38.973087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.280 [2024-07-13 15:44:38.973119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.280 [2024-07-13 15:44:38.973136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.280 [2024-07-13 15:44:38.984094] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.280 [2024-07-13 15:44:38.984125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.280 [2024-07-13 15:44:38.984142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.280 [2024-07-13 15:44:38.995266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.280 [2024-07-13 15:44:38.995301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.280 [2024-07-13 15:44:38.995320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.280 [2024-07-13 15:44:39.006348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.280 [2024-07-13 15:44:39.006382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.280 [2024-07-13 15:44:39.006401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.280 [2024-07-13 15:44:39.017416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.280 [2024-07-13 15:44:39.017450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.281 [2024-07-13 15:44:39.017468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.281 [2024-07-13 15:44:39.028496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.281 [2024-07-13 15:44:39.028530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.281 [2024-07-13 15:44:39.028549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.281 [2024-07-13 15:44:39.039630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.281 [2024-07-13 15:44:39.039664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.281 [2024-07-13 15:44:39.039683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.539 [2024-07-13 15:44:39.050578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.539 [2024-07-13 15:44:39.050612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.539 [2024-07-13 15:44:39.050631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:33:08.539 [2024-07-13 15:44:39.061556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.539 [2024-07-13 15:44:39.061590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.539 [2024-07-13 15:44:39.061609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.539 [2024-07-13 15:44:39.072564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.539 [2024-07-13 15:44:39.072598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.539 [2024-07-13 15:44:39.072617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.539 [2024-07-13 15:44:39.083548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.539 [2024-07-13 15:44:39.083583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.539 [2024-07-13 15:44:39.083602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.539 [2024-07-13 15:44:39.094664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.539 [2024-07-13 15:44:39.094699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.539 [2024-07-13 15:44:39.094718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.539 [2024-07-13 15:44:39.105800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.539 [2024-07-13 15:44:39.105834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.540 [2024-07-13 15:44:39.105860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.540 [2024-07-13 15:44:39.116767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.540 [2024-07-13 15:44:39.116800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.540 [2024-07-13 15:44:39.116819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.540 [2024-07-13 15:44:39.127830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.540 [2024-07-13 15:44:39.127873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.540 [2024-07-13 15:44:39.127896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.540 [2024-07-13 15:44:39.138836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.540 [2024-07-13 15:44:39.138878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.540 [2024-07-13 15:44:39.138913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.540 [2024-07-13 15:44:39.149690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.540 [2024-07-13 15:44:39.149724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.540 [2024-07-13 15:44:39.149743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.540 [2024-07-13 15:44:39.160692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.540 [2024-07-13 15:44:39.160727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.540 [2024-07-13 15:44:39.160745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.540 [2024-07-13 15:44:39.171654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.540 [2024-07-13 15:44:39.171687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.540 [2024-07-13 15:44:39.171706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.540 [2024-07-13 15:44:39.182590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.540 [2024-07-13 15:44:39.182624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.540 [2024-07-13 15:44:39.182642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.540 [2024-07-13 15:44:39.193676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.540 [2024-07-13 15:44:39.193710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.540 [2024-07-13 15:44:39.193729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.540 [2024-07-13 15:44:39.204733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.540 [2024-07-13 15:44:39.204772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.540 [2024-07-13 15:44:39.204792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.540 [2024-07-13 15:44:39.215761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.540 [2024-07-13 15:44:39.215795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.540 [2024-07-13 15:44:39.215813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.540 [2024-07-13 15:44:39.226775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.540 [2024-07-13 15:44:39.226809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.540 [2024-07-13 15:44:39.226827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.540 [2024-07-13 15:44:39.238105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.540 [2024-07-13 15:44:39.238135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.540 [2024-07-13 15:44:39.238153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.540 [2024-07-13 15:44:39.249245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.540 [2024-07-13 15:44:39.249280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.540 [2024-07-13 15:44:39.249299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.540 [2024-07-13 15:44:39.260465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.540 [2024-07-13 15:44:39.260499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.540 [2024-07-13 15:44:39.260518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.540 [2024-07-13 15:44:39.271477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.540 [2024-07-13 15:44:39.271511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.540 [2024-07-13 15:44:39.271530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.540 [2024-07-13 15:44:39.282464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.540 [2024-07-13 15:44:39.282498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.540 [2024-07-13 15:44:39.282517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.540 [2024-07-13 15:44:39.293488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.540 [2024-07-13 15:44:39.293522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.540 [2024-07-13 15:44:39.293540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.540 [2024-07-13 15:44:39.304486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.540 [2024-07-13 15:44:39.304520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.540 [2024-07-13 15:44:39.304539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.799 [2024-07-13 15:44:39.315525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.799 [2024-07-13 15:44:39.315559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.799 [2024-07-13 15:44:39.315578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.799 [2024-07-13 15:44:39.326680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.799 [2024-07-13 15:44:39.326714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.799 [2024-07-13 15:44:39.326732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.799 [2024-07-13 15:44:39.337796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.799 [2024-07-13 15:44:39.337830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.799 [2024-07-13 15:44:39.337849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.799 [2024-07-13 15:44:39.348673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.799 [2024-07-13 15:44:39.348707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.799 [2024-07-13 15:44:39.348725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.799 [2024-07-13 15:44:39.359742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.799 [2024-07-13 15:44:39.359776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.799 
[2024-07-13 15:44:39.359795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.799 [2024-07-13 15:44:39.370817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.799 [2024-07-13 15:44:39.370850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.799 [2024-07-13 15:44:39.370876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.799 [2024-07-13 15:44:39.381808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.799 [2024-07-13 15:44:39.381842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.799 [2024-07-13 15:44:39.381860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.799 [2024-07-13 15:44:39.393099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.799 [2024-07-13 15:44:39.393130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.799 [2024-07-13 15:44:39.393154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.799 [2024-07-13 15:44:39.404188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2276f00) 00:33:08.799 [2024-07-13 15:44:39.404235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.799 [2024-07-13 15:44:39.404254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.799 00:33:08.799 Latency(us) 00:33:08.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:08.799 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:08.799 nvme0n1 : 2.00 2646.01 330.75 0.00 0.00 6041.98 1614.13 14854.83 00:33:08.799 =================================================================================================================== 00:33:08.799 Total : 2646.01 330.75 0.00 0.00 6041.98 1614.13 14854.83 00:33:08.799 0 00:33:08.799 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:08.799 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:08.799 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:08.799 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:08.799 | .driver_specific 00:33:08.799 | .nvme_error 00:33:08.799 | .status_code 00:33:08.799 | .command_transient_transport_error' 00:33:09.057 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 )) 00:33:09.057 15:44:39 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1254628 00:33:09.057 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1254628 ']' 00:33:09.057 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1254628 00:33:09.057 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:09.057 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:09.057 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1254628 00:33:09.057 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:09.057 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:09.057 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1254628' 00:33:09.057 killing process with pid 1254628 00:33:09.057 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1254628 00:33:09.057 Received shutdown signal, test time was about 2.000000 seconds 00:33:09.057 00:33:09.057 Latency(us) 00:33:09.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:09.057 =================================================================================================================== 00:33:09.057 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:09.057 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1254628 00:33:09.316 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:33:09.316 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:09.316 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:09.316 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:09.316 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:09.316 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1255036 00:33:09.316 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1255036 /var/tmp/bperf.sock 00:33:09.316 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1255036 ']' 00:33:09.316 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:09.316 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:09.316 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:09.316 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:09.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
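[editor's note] Between passes, host/digest.sh validates the previous run and tears down bdevperf before relaunching it for the next workload. The validation reads the per-bdev NVMe error statistics over the bperf RPC socket and extracts the transient transport error counter with jq, exactly as in the xtrace above (the count of 170 and pid 1254628 are specific to this run). Below is a condensed, non-authoritative sketch of that check; the rpc.py path, socket, bdev name, and jq filter are taken verbatim from the log, while the RPC/SOCK/errcount variable names are introduced here only for illustration.

    # Sketch of the transient-error check performed above (one pass of the digest-error test).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock

    # bdev_nvme_set_options --nvme-error-stat (set at pass start) makes bdev_get_iostat
    # report NVMe completion status counters under driver_specific.nvme_error.
    errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

    # The pass succeeds only if at least one TRANSIENT TRANSPORT ERROR (00/22) completion
    # was counted; afterwards the bdevperf process is killed and waited on.
    (( errcount > 0 )) || exit 1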
00:33:09.316 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:09.316 15:44:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:09.316 [2024-07-13 15:44:39.958638] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:33:09.316 [2024-07-13 15:44:39.958710] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255036 ] 00:33:09.316 EAL: No free 2048 kB hugepages reported on node 1 00:33:09.316 [2024-07-13 15:44:39.991691] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:09.316 [2024-07-13 15:44:40.020079] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.574 [2024-07-13 15:44:40.113074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:09.574 15:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:09.574 15:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:09.574 15:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:09.574 15:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:09.832 15:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:09.832 15:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.832 15:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:09.832 15:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.832 15:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:09.832 15:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:10.399 nvme0n1 00:33:10.399 15:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:10.399 15:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.399 15:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:10.399 15:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.399 15:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:10.399 15:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:10.399 Running I/O for 2 seconds... 
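[editor's note] The xtrace above shows the setup for the randwrite pass: bdevperf is started with -z against /var/tmp/bperf.sock, NVMe error statistics and unlimited retries are enabled, the controller is attached over TCP with data digest (--ddgst) enabled, and the accel crc32c operation is set to corrupt every 256th operation so that receive-side digest checks fail and the target completes those commands with TRANSIENT TRANSPORT ERROR (00/22). A condensed sketch of that RPC sequence follows; the commands, address, and subsystem NQN are copied from the log, bperf_rpc is defined as the log shows (rpc.py against the bperf socket), and the socket behind rpc_cmd is an assumption since it is not shown in this excerpt.

    # Sketch of the randwrite error-injection setup driven above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bperf_rpc() { "$RPC" -s /var/tmp/bperf.sock "$@"; }   # per host/digest.sh@18 above
    rpc_cmd()   { "$RPC" "$@"; }                          # assumption: default RPC socket of the target app

    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count errors, retry forever
    rpc_cmd accel_error_inject_error -o crc32c -t disable                     # start with injection off
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0                                # data digest enabled
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256              # corrupt every 256th crc32c op
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests                                  # run the 2-second workload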
00:33:10.399 [2024-07-13 15:44:41.047065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190ed920 00:33:10.399 [2024-07-13 15:44:41.048349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.399 [2024-07-13 15:44:41.048391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:10.399 [2024-07-13 15:44:41.060897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190fef90 00:33:10.399 [2024-07-13 15:44:41.062197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.399 [2024-07-13 15:44:41.062225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:10.399 [2024-07-13 15:44:41.074217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190ee190 00:33:10.399 [2024-07-13 15:44:41.075727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.399 [2024-07-13 15:44:41.075760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:10.399 [2024-07-13 15:44:41.086420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e4578 00:33:10.399 [2024-07-13 15:44:41.087923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.399 [2024-07-13 15:44:41.087950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:10.399 [2024-07-13 15:44:41.099887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e7818 00:33:10.399 [2024-07-13 15:44:41.101554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.399 [2024-07-13 15:44:41.101587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:10.399 [2024-07-13 15:44:41.112096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190fa7d8 00:33:10.399 [2024-07-13 15:44:41.113212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.399 [2024-07-13 15:44:41.113241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:10.399 [2024-07-13 15:44:41.125045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e4de8 00:33:10.399 [2024-07-13 15:44:41.125979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.399 [2024-07-13 15:44:41.126021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006b 
p:0 m:0 dnr:0 00:33:10.399 [2024-07-13 15:44:41.137846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190f57b0 00:33:10.399 [2024-07-13 15:44:41.139165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.399 [2024-07-13 15:44:41.139193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:10.399 [2024-07-13 15:44:41.150832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190eaef0 00:33:10.399 [2024-07-13 15:44:41.152342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.399 [2024-07-13 15:44:41.152373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:10.399 [2024-07-13 15:44:41.163886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e9e10 00:33:10.659 [2024-07-13 15:44:41.165498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.659 [2024-07-13 15:44:41.165530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:10.659 [2024-07-13 15:44:41.177244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190f4f40 00:33:10.659 [2024-07-13 15:44:41.178932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.659 [2024-07-13 15:44:41.178958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:10.659 [2024-07-13 15:44:41.188009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e9168 00:33:10.659 [2024-07-13 15:44:41.188785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.659 [2024-07-13 15:44:41.188816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:10.659 [2024-07-13 15:44:41.202528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190fda78 00:33:10.659 [2024-07-13 15:44:41.204336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.659 [2024-07-13 15:44:41.204367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:10.659 [2024-07-13 15:44:41.214347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190df988 00:33:10.659 [2024-07-13 15:44:41.215623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.659 [2024-07-13 15:44:41.215654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:006a p:0 m:0 dnr:0 00:33:10.659 [2024-07-13 15:44:41.227298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190ef270 00:33:10.659 [2024-07-13 15:44:41.228452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.659 [2024-07-13 15:44:41.228484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:10.659 [2024-07-13 15:44:41.239128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190fd208 00:33:10.659 [2024-07-13 15:44:41.241226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.659 [2024-07-13 15:44:41.241254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:10.659 [2024-07-13 15:44:41.251068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190f20d8 00:33:10.659 [2024-07-13 15:44:41.252049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.659 [2024-07-13 15:44:41.252080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:10.659 [2024-07-13 15:44:41.264187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190fc128 00:33:10.659 [2024-07-13 15:44:41.265327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.659 [2024-07-13 15:44:41.265358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:10.659 [2024-07-13 15:44:41.277099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e8088 00:33:10.659 [2024-07-13 15:44:41.278243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.659 [2024-07-13 15:44:41.278272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:10.659 [2024-07-13 15:44:41.289773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e01f8 00:33:10.659 [2024-07-13 15:44:41.290930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.659 [2024-07-13 15:44:41.290956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:10.659 [2024-07-13 15:44:41.302660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190f6cc8 00:33:10.659 [2024-07-13 15:44:41.303636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.659 [2024-07-13 15:44:41.303668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:10.659 [2024-07-13 15:44:41.315707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e1b48 00:33:10.659 [2024-07-13 15:44:41.316830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.659 [2024-07-13 15:44:41.316861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:10.659 [2024-07-13 15:44:41.327447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190feb58 00:33:10.659 [2024-07-13 15:44:41.329524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.659 [2024-07-13 15:44:41.329555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:10.659 [2024-07-13 15:44:41.339523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190f96f8 00:33:10.659 [2024-07-13 15:44:41.340472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.659 [2024-07-13 15:44:41.340503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:10.659 [2024-07-13 15:44:41.352578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190f0788 00:33:10.659 [2024-07-13 15:44:41.353737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:68 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.659 [2024-07-13 15:44:41.353768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:10.659 [2024-07-13 15:44:41.365836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190fa3a0 00:33:10.659 [2024-07-13 15:44:41.367143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.659 [2024-07-13 15:44:41.367174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:10.659 [2024-07-13 15:44:41.378758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e9e10 00:33:10.659 [2024-07-13 15:44:41.380102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.659 [2024-07-13 15:44:41.380130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:10.659 [2024-07-13 15:44:41.390612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190f6cc8 00:33:10.659 [2024-07-13 15:44:41.391925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.659 [2024-07-13 15:44:41.391951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:10.659 [2024-07-13 15:44:41.404651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e38d0 00:33:10.659 [2024-07-13 15:44:41.406136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.659 [2024-07-13 15:44:41.406179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:10.659 [2024-07-13 15:44:41.417822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e12d8 00:33:10.659 [2024-07-13 15:44:41.419479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.659 [2024-07-13 15:44:41.419507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:10.918 [2024-07-13 15:44:41.428874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190ed920 00:33:10.918 [2024-07-13 15:44:41.429610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.919 [2024-07-13 15:44:41.429638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:10.919 [2024-07-13 15:44:41.442087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190de8a8 00:33:10.919 [2024-07-13 15:44:41.442973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.919 [2024-07-13 15:44:41.443016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:10.919 [2024-07-13 15:44:41.456551] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190feb58 00:33:10.919 [2024-07-13 15:44:41.458527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.919 [2024-07-13 15:44:41.458558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:10.919 [2024-07-13 15:44:41.468463] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190f1868 00:33:10.919 [2024-07-13 15:44:41.469947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.919 [2024-07-13 15:44:41.469974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:10.919 [2024-07-13 15:44:41.479942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e7c50 00:33:10.919 [2024-07-13 15:44:41.482018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.919 [2024-07-13 15:44:41.482046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:10.919 [2024-07-13 15:44:41.492179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190df550 00:33:10.919 [2024-07-13 15:44:41.493128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.919 [2024-07-13 15:44:41.493153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:10.919 [2024-07-13 15:44:41.505145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190f1430 00:33:10.919 [2024-07-13 15:44:41.506274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.919 [2024-07-13 15:44:41.506305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:10.919 [2024-07-13 15:44:41.517209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190f6020 00:33:10.919 [2024-07-13 15:44:41.518334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.919 [2024-07-13 15:44:41.518365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:10.919 [2024-07-13 15:44:41.530477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190fda78 00:33:10.919 [2024-07-13 15:44:41.531784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.919 [2024-07-13 15:44:41.531815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:10.919 [2024-07-13 15:44:41.543879] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190dece0 00:33:10.919 [2024-07-13 15:44:41.545408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.919 [2024-07-13 15:44:41.545440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:10.919 [2024-07-13 15:44:41.557124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e8d30 00:33:10.919 [2024-07-13 15:44:41.558707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.919 [2024-07-13 15:44:41.558735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:10.919 [2024-07-13 15:44:41.570312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190f6020 00:33:10.919 [2024-07-13 15:44:41.572129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.919 [2024-07-13 15:44:41.572172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:10.919 [2024-07-13 15:44:41.581804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190ed0b0 00:33:10.919 [2024-07-13 15:44:41.583005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.919 [2024-07-13 15:44:41.583034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:10.919 [2024-07-13 15:44:41.593706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e4140 00:33:10.919 [2024-07-13 15:44:41.594721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.919 [2024-07-13 15:44:41.594749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:10.919 [2024-07-13 15:44:41.604729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e0630 00:33:10.919 [2024-07-13 15:44:41.606455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.919 [2024-07-13 15:44:41.606482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:10.919 [2024-07-13 15:44:41.614912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190f9f68 00:33:10.919 [2024-07-13 15:44:41.615727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.919 [2024-07-13 15:44:41.615753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:10.919 [2024-07-13 15:44:41.628054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190f96f8 00:33:10.919 [2024-07-13 15:44:41.629069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.919 [2024-07-13 15:44:41.629095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:10.919 [2024-07-13 15:44:41.640311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e1b48 00:33:10.919 [2024-07-13 15:44:41.641440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.919 [2024-07-13 15:44:41.641468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:10.919 [2024-07-13 15:44:41.652608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190f5378 00:33:10.919 [2024-07-13 15:44:41.653957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.919 [2024-07-13 15:44:41.653984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:10.919 [2024-07-13 15:44:41.664873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e73e0 00:33:10.919 [2024-07-13 15:44:41.666386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.919 [2024-07-13 15:44:41.666413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:10.919 [2024-07-13 15:44:41.676082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e99d8 00:33:10.919 [2024-07-13 15:44:41.677501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.919 [2024-07-13 15:44:41.677528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:11.179 [2024-07-13 15:44:41.688862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190efae0 00:33:11.179 [2024-07-13 15:44:41.690528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.179 [2024-07-13 15:44:41.690560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:11.179 [2024-07-13 15:44:41.701238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190feb58 00:33:11.179 [2024-07-13 15:44:41.703009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.179 [2024-07-13 15:44:41.703037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:11.179 [2024-07-13 15:44:41.712108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190f46d0 00:33:11.179 [2024-07-13 15:44:41.713500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.179 [2024-07-13 15:44:41.713528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:11.179 [2024-07-13 15:44:41.722648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e12d8 00:33:11.179 [2024-07-13 15:44:41.724635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.179 [2024-07-13 15:44:41.724664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:11.179 [2024-07-13 15:44:41.732898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e4578 00:33:11.179 [2024-07-13 15:44:41.733714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.179 [2024-07-13 15:44:41.733739] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:11.179 [2024-07-13 15:44:41.745064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190f6020 00:33:11.179 [2024-07-13 15:44:41.746063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.179 [2024-07-13 15:44:41.746089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:11.179 [2024-07-13 15:44:41.758167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190fef90 00:33:11.179 [2024-07-13 15:44:41.759455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.179 [2024-07-13 15:44:41.759483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:11.179 [2024-07-13 15:44:41.769210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190ebfd0 00:33:11.179 [2024-07-13 15:44:41.770399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.179 [2024-07-13 15:44:41.770426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:11.179 [2024-07-13 15:44:41.781459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190ebb98 00:33:11.179 [2024-07-13 15:44:41.782782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.179 [2024-07-13 15:44:41.782809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:11.179 [2024-07-13 15:44:41.793663] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190f8e88 00:33:11.179 [2024-07-13 15:44:41.795088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.179 [2024-07-13 15:44:41.795114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:11.179 [2024-07-13 15:44:41.805872] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e1710 00:33:11.179 [2024-07-13 15:44:41.807530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.179 [2024-07-13 15:44:41.807556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:11.179 [2024-07-13 15:44:41.818272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e7c50 00:33:11.179 [2024-07-13 15:44:41.820056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.179 [2024-07-13 
15:44:41.820084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:11.179 [2024-07-13 15:44:41.830578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190f7100 00:33:11.179 [2024-07-13 15:44:41.832540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.180 [2024-07-13 15:44:41.832567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:11.180 [2024-07-13 15:44:41.838792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190de038 00:33:11.180 [2024-07-13 15:44:41.839700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.180 [2024-07-13 15:44:41.839727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:11.180 [2024-07-13 15:44:41.850817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190ed4e8 00:33:11.180 [2024-07-13 15:44:41.851688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.180 [2024-07-13 15:44:41.851716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:11.180 [2024-07-13 15:44:41.862757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190ef6a8 00:33:11.180 [2024-07-13 15:44:41.863688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.180 [2024-07-13 15:44:41.863717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:11.180 [2024-07-13 15:44:41.874742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190ecc78 00:33:11.180 [2024-07-13 15:44:41.875729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.180 [2024-07-13 15:44:41.875757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:11.180 [2024-07-13 15:44:41.886784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e7818 00:33:11.180 [2024-07-13 15:44:41.887670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.180 [2024-07-13 15:44:41.887698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:11.180 [2024-07-13 15:44:41.899087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190f5378 00:33:11.180 [2024-07-13 15:44:41.900121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.180 
[2024-07-13 15:44:41.900164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:11.180 [2024-07-13 15:44:41.912207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.180 [2024-07-13 15:44:41.912576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.180 [2024-07-13 15:44:41.912602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.180 [2024-07-13 15:44:41.925636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.180 [2024-07-13 15:44:41.925933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.180 [2024-07-13 15:44:41.925960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.180 [2024-07-13 15:44:41.938840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.180 [2024-07-13 15:44:41.939105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.180 [2024-07-13 15:44:41.939147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.439 [2024-07-13 15:44:41.952419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.439 [2024-07-13 15:44:41.952712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.439 [2024-07-13 15:44:41.952738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.439 [2024-07-13 15:44:41.965669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.439 [2024-07-13 15:44:41.965953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.439 [2024-07-13 15:44:41.965994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.439 [2024-07-13 15:44:41.978995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.439 [2024-07-13 15:44:41.979288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.439 [2024-07-13 15:44:41.979315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.439 [2024-07-13 15:44:41.992433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.439 [2024-07-13 15:44:41.992713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17197 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:11.439 [2024-07-13 15:44:41.992754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.439 [2024-07-13 15:44:42.005677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.439 [2024-07-13 15:44:42.005959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.439 [2024-07-13 15:44:42.006007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.439 [2024-07-13 15:44:42.019021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.439 [2024-07-13 15:44:42.019322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.439 [2024-07-13 15:44:42.019348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.439 [2024-07-13 15:44:42.032303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.439 [2024-07-13 15:44:42.032583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.439 [2024-07-13 15:44:42.032626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.439 [2024-07-13 15:44:42.045495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.439 [2024-07-13 15:44:42.045819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.439 [2024-07-13 15:44:42.045845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.439 [2024-07-13 15:44:42.058752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.439 [2024-07-13 15:44:42.059024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.439 [2024-07-13 15:44:42.059050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.439 [2024-07-13 15:44:42.071920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.439 [2024-07-13 15:44:42.072172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.439 [2024-07-13 15:44:42.072214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.439 [2024-07-13 15:44:42.085097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.439 [2024-07-13 15:44:42.085399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7917 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:11.439 [2024-07-13 15:44:42.085425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.439 [2024-07-13 15:44:42.098389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.439 [2024-07-13 15:44:42.098704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.439 [2024-07-13 15:44:42.098730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.439 [2024-07-13 15:44:42.111682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.439 [2024-07-13 15:44:42.111954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.439 [2024-07-13 15:44:42.111995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.439 [2024-07-13 15:44:42.124908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.439 [2024-07-13 15:44:42.125200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.439 [2024-07-13 15:44:42.125228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.439 [2024-07-13 15:44:42.138074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.439 [2024-07-13 15:44:42.138328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.439 [2024-07-13 15:44:42.138355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.439 [2024-07-13 15:44:42.151320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.439 [2024-07-13 15:44:42.151646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.439 [2024-07-13 15:44:42.151672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.439 [2024-07-13 15:44:42.164634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.440 [2024-07-13 15:44:42.164943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.440 [2024-07-13 15:44:42.164970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.440 [2024-07-13 15:44:42.177976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.440 [2024-07-13 15:44:42.178267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2082 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.440 [2024-07-13 15:44:42.178294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.440 [2024-07-13 15:44:42.191340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.440 [2024-07-13 15:44:42.191714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.440 [2024-07-13 15:44:42.191740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.698 [2024-07-13 15:44:42.204888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.699 [2024-07-13 15:44:42.205214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.699 [2024-07-13 15:44:42.205240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.699 [2024-07-13 15:44:42.218248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.699 [2024-07-13 15:44:42.218541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.699 [2024-07-13 15:44:42.218567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.699 [2024-07-13 15:44:42.231659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.699 [2024-07-13 15:44:42.232006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.699 [2024-07-13 15:44:42.232034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.699 [2024-07-13 15:44:42.245113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.699 [2024-07-13 15:44:42.245414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.699 [2024-07-13 15:44:42.245441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.699 [2024-07-13 15:44:42.258577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.699 [2024-07-13 15:44:42.258951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.699 [2024-07-13 15:44:42.258978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.699 [2024-07-13 15:44:42.271991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.699 [2024-07-13 15:44:42.272327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 
nsid:1 lba:1075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.699 [2024-07-13 15:44:42.272354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.699 [2024-07-13 15:44:42.285453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.699 [2024-07-13 15:44:42.285744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.699 [2024-07-13 15:44:42.285770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.699 [2024-07-13 15:44:42.298885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.699 [2024-07-13 15:44:42.299144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.699 [2024-07-13 15:44:42.299185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.699 [2024-07-13 15:44:42.312039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.699 [2024-07-13 15:44:42.312363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.699 [2024-07-13 15:44:42.312389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.699 [2024-07-13 15:44:42.325116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.699 [2024-07-13 15:44:42.325476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.699 [2024-07-13 15:44:42.325503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.699 [2024-07-13 15:44:42.338345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.699 [2024-07-13 15:44:42.338641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.699 [2024-07-13 15:44:42.338667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.699 [2024-07-13 15:44:42.351620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.699 [2024-07-13 15:44:42.351903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.699 [2024-07-13 15:44:42.351950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.699 [2024-07-13 15:44:42.364938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.699 [2024-07-13 15:44:42.365205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:13 nsid:1 lba:20938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.699 [2024-07-13 15:44:42.365251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.699 [2024-07-13 15:44:42.378131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.699 [2024-07-13 15:44:42.378496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.699 [2024-07-13 15:44:42.378522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.699 [2024-07-13 15:44:42.391284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.699 [2024-07-13 15:44:42.391647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.699 [2024-07-13 15:44:42.391673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.699 [2024-07-13 15:44:42.404576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.699 [2024-07-13 15:44:42.404937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.699 [2024-07-13 15:44:42.404963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.699 [2024-07-13 15:44:42.417915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.699 [2024-07-13 15:44:42.418168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.699 [2024-07-13 15:44:42.418210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.699 [2024-07-13 15:44:42.431212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.699 [2024-07-13 15:44:42.431634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.699 [2024-07-13 15:44:42.431660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.699 [2024-07-13 15:44:42.444402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.699 [2024-07-13 15:44:42.444763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.699 [2024-07-13 15:44:42.444788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.699 [2024-07-13 15:44:42.457688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.699 [2024-07-13 15:44:42.457942] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.699 [2024-07-13 15:44:42.457969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.958 [2024-07-13 15:44:42.471038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.958 [2024-07-13 15:44:42.471389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.958 [2024-07-13 15:44:42.471422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.958 [2024-07-13 15:44:42.484346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.958 [2024-07-13 15:44:42.484664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.958 [2024-07-13 15:44:42.484690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.959 [2024-07-13 15:44:42.498181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.959 [2024-07-13 15:44:42.498552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.959 [2024-07-13 15:44:42.498594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.959 [2024-07-13 15:44:42.511403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.959 [2024-07-13 15:44:42.511727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.959 [2024-07-13 15:44:42.511753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.959 [2024-07-13 15:44:42.524793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.959 [2024-07-13 15:44:42.525080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.959 [2024-07-13 15:44:42.525106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.959 [2024-07-13 15:44:42.538075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.959 [2024-07-13 15:44:42.538369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.959 [2024-07-13 15:44:42.538395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.959 [2024-07-13 15:44:42.551461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.959 [2024-07-13 15:44:42.551719] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.959 [2024-07-13 15:44:42.551760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.959 [2024-07-13 15:44:42.564763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.959 [2024-07-13 15:44:42.565040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.959 [2024-07-13 15:44:42.565067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.959 [2024-07-13 15:44:42.577806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.959 [2024-07-13 15:44:42.578069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.959 [2024-07-13 15:44:42.578095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.959 [2024-07-13 15:44:42.591115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.959 [2024-07-13 15:44:42.591482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.959 [2024-07-13 15:44:42.591509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.959 [2024-07-13 15:44:42.604288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.959 [2024-07-13 15:44:42.604643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.959 [2024-07-13 15:44:42.604669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.959 [2024-07-13 15:44:42.617592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.959 [2024-07-13 15:44:42.617880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.959 [2024-07-13 15:44:42.617921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.959 [2024-07-13 15:44:42.630843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.959 [2024-07-13 15:44:42.631133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.959 [2024-07-13 15:44:42.631174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.959 [2024-07-13 15:44:42.644032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.959 [2024-07-13 15:44:42.644391] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.959 [2024-07-13 15:44:42.644417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.959 [2024-07-13 15:44:42.657268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.959 [2024-07-13 15:44:42.657654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.959 [2024-07-13 15:44:42.657682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.959 [2024-07-13 15:44:42.670530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.959 [2024-07-13 15:44:42.670889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.959 [2024-07-13 15:44:42.670916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.959 [2024-07-13 15:44:42.683682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.959 [2024-07-13 15:44:42.683961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.959 [2024-07-13 15:44:42.683988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.959 [2024-07-13 15:44:42.696918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.959 [2024-07-13 15:44:42.697198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.959 [2024-07-13 15:44:42.697239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.959 [2024-07-13 15:44:42.710120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.959 [2024-07-13 15:44:42.710478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.959 [2024-07-13 15:44:42.710504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:11.959 [2024-07-13 15:44:42.723466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:11.959 [2024-07-13 15:44:42.723743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.959 [2024-07-13 15:44:42.723769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.218 [2024-07-13 15:44:42.736847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:12.218 
[2024-07-13 15:44:42.737201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.218 [2024-07-13 15:44:42.737228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.218 [2024-07-13 15:44:42.750043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:12.218 [2024-07-13 15:44:42.750337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.218 [2024-07-13 15:44:42.750363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.218 [2024-07-13 15:44:42.763433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:12.218 [2024-07-13 15:44:42.763756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.218 [2024-07-13 15:44:42.763781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.218 [2024-07-13 15:44:42.776669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:12.218 [2024-07-13 15:44:42.776983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.218 [2024-07-13 15:44:42.777026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.218 [2024-07-13 15:44:42.790099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:12.218 [2024-07-13 15:44:42.790410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.218 [2024-07-13 15:44:42.790436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.218 [2024-07-13 15:44:42.803471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:12.218 [2024-07-13 15:44:42.803727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.218 [2024-07-13 15:44:42.803754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.218 [2024-07-13 15:44:42.816898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:12.218 [2024-07-13 15:44:42.817158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.218 [2024-07-13 15:44:42.817190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.218 [2024-07-13 15:44:42.830069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 
00:33:12.218 [2024-07-13 15:44:42.830420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.218 [2024-07-13 15:44:42.830446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.218 [2024-07-13 15:44:42.843378] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:12.218 [2024-07-13 15:44:42.843684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.218 [2024-07-13 15:44:42.843724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.218 [2024-07-13 15:44:42.856622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:12.218 [2024-07-13 15:44:42.856966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.218 [2024-07-13 15:44:42.856992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.218 [2024-07-13 15:44:42.870016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:12.218 [2024-07-13 15:44:42.870381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.218 [2024-07-13 15:44:42.870406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.218 [2024-07-13 15:44:42.883877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:12.219 [2024-07-13 15:44:42.884162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.219 [2024-07-13 15:44:42.884204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.219 [2024-07-13 15:44:42.898043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:12.219 [2024-07-13 15:44:42.898346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.219 [2024-07-13 15:44:42.898376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.219 [2024-07-13 15:44:42.912157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:12.219 [2024-07-13 15:44:42.912464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.219 [2024-07-13 15:44:42.912495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.219 [2024-07-13 15:44:42.926281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with 
pdu=0x2000190e6738 00:33:12.219 [2024-07-13 15:44:42.926596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.219 [2024-07-13 15:44:42.926626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.219 [2024-07-13 15:44:42.940399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:12.219 [2024-07-13 15:44:42.940721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.219 [2024-07-13 15:44:42.940751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.219 [2024-07-13 15:44:42.954544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:12.219 [2024-07-13 15:44:42.954857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.219 [2024-07-13 15:44:42.954896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.219 [2024-07-13 15:44:42.968770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:12.219 [2024-07-13 15:44:42.969093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.219 [2024-07-13 15:44:42.969135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.219 [2024-07-13 15:44:42.982934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:12.219 [2024-07-13 15:44:42.983216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.219 [2024-07-13 15:44:42.983243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.476 [2024-07-13 15:44:42.997184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:12.476 [2024-07-13 15:44:42.997497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.476 [2024-07-13 15:44:42.997527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.476 [2024-07-13 15:44:43.011306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:12.476 [2024-07-13 15:44:43.011634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.476 [2024-07-13 15:44:43.011664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.476 [2024-07-13 15:44:43.025425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:12.476 [2024-07-13 15:44:43.025734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.476 [2024-07-13 15:44:43.025764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.476 [2024-07-13 15:44:43.039618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc49f0) with pdu=0x2000190e6738 00:33:12.476 [2024-07-13 15:44:43.039945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.476 [2024-07-13 15:44:43.039971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:12.476 00:33:12.476 Latency(us) 00:33:12.476 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.476 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:12.476 nvme0n1 : 2.01 19736.45 77.10 0.00 0.00 6469.73 2475.80 15728.64 00:33:12.476 =================================================================================================================== 00:33:12.476 Total : 19736.45 77.10 0.00 0.00 6469.73 2475.80 15728.64 00:33:12.476 0 00:33:12.476 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:12.476 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:12.476 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:12.476 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:12.476 | .driver_specific 00:33:12.476 | .nvme_error 00:33:12.476 | .status_code 00:33:12.476 | .command_transient_transport_error' 00:33:12.733 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 155 > 0 )) 00:33:12.733 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1255036 00:33:12.733 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1255036 ']' 00:33:12.733 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1255036 00:33:12.733 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:12.733 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:12.733 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1255036 00:33:12.733 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:12.733 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:12.733 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1255036' 00:33:12.733 killing process with pid 1255036 00:33:12.733 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1255036 00:33:12.733 Received shutdown signal, test time was about 2.000000 seconds 00:33:12.733 00:33:12.733 
Latency(us) 00:33:12.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.733 =================================================================================================================== 00:33:12.733 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:12.733 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1255036 00:33:12.990 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:33:12.990 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:12.990 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:12.990 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:12.990 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:12.990 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1255445 00:33:12.990 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:12.990 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1255445 /var/tmp/bperf.sock 00:33:12.990 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1255445 ']' 00:33:12.990 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:12.990 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:12.990 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:12.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:12.990 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:12.990 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:12.990 [2024-07-13 15:44:43.617494] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:33:12.990 [2024-07-13 15:44:43.617573] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255445 ] 00:33:12.990 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:12.990 Zero copy mechanism will not be used. 00:33:12.990 EAL: No free 2048 kB hugepages reported on node 1 00:33:12.990 [2024-07-13 15:44:43.648881] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:33:12.990 [2024-07-13 15:44:43.680871] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.247 [2024-07-13 15:44:43.769430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:13.247 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:13.247 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:13.247 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:13.248 15:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:13.504 15:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:13.504 15:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.504 15:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:13.505 15:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.505 15:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:13.505 15:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:14.093 nvme0n1 00:33:14.093 15:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:14.093 15:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.093 15:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:14.093 15:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.093 15:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:14.093 15:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:14.093 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:14.093 Zero copy mechanism will not be used. 00:33:14.093 Running I/O for 2 seconds... 
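For reference, the setup traced above can be repeated by hand; the following is a minimal sketch built only from the RPC calls visible in this log (bdevperf listening on /var/tmp/bperf.sock, target at 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1). Paths are assumed relative to the SPDK source tree, and the socket used for accel_error_inject_error is not shown in the trace, so it is assumed to be the target application's default RPC socket.

    # enable NVMe error statistics and unlimited bdev retries on the bdevperf side
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # make sure no crc32c error injection is active before attaching (as in host/digest.sh)
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable

    # attach the controller with TCP data digest enabled (--ddgst)
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # start corrupting crc32c results so the host sees data digest errors; -i 32 taken verbatim from the trace
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

    # run the timed workload, then read back the transient transport error count the test asserts on
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'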
00:33:14.093 [2024-07-13 15:44:44.735715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.093 [2024-07-13 15:44:44.736136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.093 [2024-07-13 15:44:44.736201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.093 [2024-07-13 15:44:44.750764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.093 [2024-07-13 15:44:44.751163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.093 [2024-07-13 15:44:44.751212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.093 [2024-07-13 15:44:44.766758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.093 [2024-07-13 15:44:44.767132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.093 [2024-07-13 15:44:44.767161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.093 [2024-07-13 15:44:44.781174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.093 [2024-07-13 15:44:44.781659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.093 [2024-07-13 15:44:44.781691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.093 [2024-07-13 15:44:44.796625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.093 [2024-07-13 15:44:44.797024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.093 [2024-07-13 15:44:44.797067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.093 [2024-07-13 15:44:44.812558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.093 [2024-07-13 15:44:44.812956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.093 [2024-07-13 15:44:44.812997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.093 [2024-07-13 15:44:44.827216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.093 [2024-07-13 15:44:44.827646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.093 [2024-07-13 15:44:44.827674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.093 [2024-07-13 15:44:44.842367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.093 [2024-07-13 15:44:44.842748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.093 [2024-07-13 15:44:44.842776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.093 [2024-07-13 15:44:44.857185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.093 [2024-07-13 15:44:44.857538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.093 [2024-07-13 15:44:44.857567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.350 [2024-07-13 15:44:44.870876] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.350 [2024-07-13 15:44:44.871291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.351 [2024-07-13 15:44:44.871331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.351 [2024-07-13 15:44:44.886928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.351 [2024-07-13 15:44:44.887290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.351 [2024-07-13 15:44:44.887333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.351 [2024-07-13 15:44:44.900470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.351 [2024-07-13 15:44:44.900838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.351 [2024-07-13 15:44:44.900879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.351 [2024-07-13 15:44:44.916087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.351 [2024-07-13 15:44:44.916382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.351 [2024-07-13 15:44:44.916410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.351 [2024-07-13 15:44:44.929664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.351 [2024-07-13 15:44:44.930083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.351 [2024-07-13 15:44:44.930111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.351 [2024-07-13 15:44:44.943527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.351 [2024-07-13 15:44:44.943906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.351 [2024-07-13 15:44:44.943947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.351 [2024-07-13 15:44:44.957588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.351 [2024-07-13 15:44:44.957983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.351 [2024-07-13 15:44:44.958025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.351 [2024-07-13 15:44:44.973172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.351 [2024-07-13 15:44:44.973540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.351 [2024-07-13 15:44:44.973568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.351 [2024-07-13 15:44:44.989024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.351 [2024-07-13 15:44:44.989208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.351 [2024-07-13 15:44:44.989237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.351 [2024-07-13 15:44:45.004777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.351 [2024-07-13 15:44:45.005167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.351 [2024-07-13 15:44:45.005217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.351 [2024-07-13 15:44:45.019097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.351 [2024-07-13 15:44:45.019478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.351 [2024-07-13 15:44:45.019506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.351 [2024-07-13 15:44:45.034505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.351 [2024-07-13 15:44:45.034791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.351 [2024-07-13 15:44:45.034818] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.351 [2024-07-13 15:44:45.049240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.351 [2024-07-13 15:44:45.049598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.351 [2024-07-13 15:44:45.049624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.351 [2024-07-13 15:44:45.063275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.351 [2024-07-13 15:44:45.063638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.351 [2024-07-13 15:44:45.063682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.351 [2024-07-13 15:44:45.078044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.351 [2024-07-13 15:44:45.078415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.351 [2024-07-13 15:44:45.078443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.351 [2024-07-13 15:44:45.091015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.351 [2024-07-13 15:44:45.091375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.351 [2024-07-13 15:44:45.091401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.351 [2024-07-13 15:44:45.106833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.351 [2024-07-13 15:44:45.107251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.351 [2024-07-13 15:44:45.107294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.608 [2024-07-13 15:44:45.122268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.608 [2024-07-13 15:44:45.122647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.608 [2024-07-13 15:44:45.122687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.608 [2024-07-13 15:44:45.137536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.608 [2024-07-13 15:44:45.137913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.608 
[2024-07-13 15:44:45.137956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.608 [2024-07-13 15:44:45.151713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.608 [2024-07-13 15:44:45.152116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.609 [2024-07-13 15:44:45.152145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.609 [2024-07-13 15:44:45.167162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.609 [2024-07-13 15:44:45.167553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.609 [2024-07-13 15:44:45.167598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.609 [2024-07-13 15:44:45.183084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.609 [2024-07-13 15:44:45.183487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.609 [2024-07-13 15:44:45.183513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.609 [2024-07-13 15:44:45.197109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.609 [2024-07-13 15:44:45.197479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.609 [2024-07-13 15:44:45.197505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.609 [2024-07-13 15:44:45.212789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.609 [2024-07-13 15:44:45.213217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.609 [2024-07-13 15:44:45.213260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.609 [2024-07-13 15:44:45.228498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.609 [2024-07-13 15:44:45.228904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.609 [2024-07-13 15:44:45.228933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.609 [2024-07-13 15:44:45.243711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.609 [2024-07-13 15:44:45.244079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:14.609 [2024-07-13 15:44:45.244108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.609 [2024-07-13 15:44:45.259417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.609 [2024-07-13 15:44:45.259774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.609 [2024-07-13 15:44:45.259816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.609 [2024-07-13 15:44:45.274900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.609 [2024-07-13 15:44:45.275302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.609 [2024-07-13 15:44:45.275328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.609 [2024-07-13 15:44:45.290181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.609 [2024-07-13 15:44:45.290627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.609 [2024-07-13 15:44:45.290654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.609 [2024-07-13 15:44:45.304972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.609 [2024-07-13 15:44:45.305333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.609 [2024-07-13 15:44:45.305378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.609 [2024-07-13 15:44:45.319280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.609 [2024-07-13 15:44:45.319612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.609 [2024-07-13 15:44:45.319639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.609 [2024-07-13 15:44:45.333887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.609 [2024-07-13 15:44:45.334273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.609 [2024-07-13 15:44:45.334300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.609 [2024-07-13 15:44:45.348943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.609 [2024-07-13 15:44:45.349307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.609 [2024-07-13 15:44:45.349350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.609 [2024-07-13 15:44:45.364265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.609 [2024-07-13 15:44:45.364625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.609 [2024-07-13 15:44:45.364651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.867 [2024-07-13 15:44:45.379121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.867 [2024-07-13 15:44:45.379490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.867 [2024-07-13 15:44:45.379517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.867 [2024-07-13 15:44:45.393768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.867 [2024-07-13 15:44:45.394156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.867 [2024-07-13 15:44:45.394205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.867 [2024-07-13 15:44:45.409342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.867 [2024-07-13 15:44:45.409691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.867 [2024-07-13 15:44:45.409719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.867 [2024-07-13 15:44:45.425004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.867 [2024-07-13 15:44:45.425424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.867 [2024-07-13 15:44:45.425451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.867 [2024-07-13 15:44:45.439783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.867 [2024-07-13 15:44:45.440127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.867 [2024-07-13 15:44:45.440170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.867 [2024-07-13 15:44:45.454432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.867 [2024-07-13 15:44:45.454807] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.867 [2024-07-13 15:44:45.454834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.867 [2024-07-13 15:44:45.469135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.867 [2024-07-13 15:44:45.469505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.867 [2024-07-13 15:44:45.469532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.867 [2024-07-13 15:44:45.483508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.867 [2024-07-13 15:44:45.483903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.867 [2024-07-13 15:44:45.483944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.867 [2024-07-13 15:44:45.498123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.867 [2024-07-13 15:44:45.498521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.867 [2024-07-13 15:44:45.498548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.867 [2024-07-13 15:44:45.513050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.867 [2024-07-13 15:44:45.513368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.867 [2024-07-13 15:44:45.513394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.867 [2024-07-13 15:44:45.526152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.867 [2024-07-13 15:44:45.526640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.867 [2024-07-13 15:44:45.526668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.867 [2024-07-13 15:44:45.541610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.867 [2024-07-13 15:44:45.542269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.867 [2024-07-13 15:44:45.542296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.868 [2024-07-13 15:44:45.557114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.868 [2024-07-13 15:44:45.557604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.868 [2024-07-13 15:44:45.557632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.868 [2024-07-13 15:44:45.571524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.868 [2024-07-13 15:44:45.572023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.868 [2024-07-13 15:44:45.572052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:14.868 [2024-07-13 15:44:45.585948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.868 [2024-07-13 15:44:45.586472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.868 [2024-07-13 15:44:45.586499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:14.868 [2024-07-13 15:44:45.600377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.868 [2024-07-13 15:44:45.600945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.868 [2024-07-13 15:44:45.600989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:14.868 [2024-07-13 15:44:45.613953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.868 [2024-07-13 15:44:45.614528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.868 [2024-07-13 15:44:45.614570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.868 [2024-07-13 15:44:45.628827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:14.868 [2024-07-13 15:44:45.629362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.868 [2024-07-13 15:44:45.629392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:15.126 [2024-07-13 15:44:45.642333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.126 [2024-07-13 15:44:45.642844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.126 [2024-07-13 15:44:45.642895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:15.126 [2024-07-13 15:44:45.656138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.126 
[2024-07-13 15:44:45.656658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.126 [2024-07-13 15:44:45.656686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:15.126 [2024-07-13 15:44:45.669654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.126 [2024-07-13 15:44:45.670109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.126 [2024-07-13 15:44:45.670154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.126 [2024-07-13 15:44:45.684181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.126 [2024-07-13 15:44:45.684750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.126 [2024-07-13 15:44:45.684778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:15.126 [2024-07-13 15:44:45.698718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.126 [2024-07-13 15:44:45.699149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.126 [2024-07-13 15:44:45.699200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:15.126 [2024-07-13 15:44:45.712727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.126 [2024-07-13 15:44:45.713268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.126 [2024-07-13 15:44:45.713296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:15.126 [2024-07-13 15:44:45.726669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.126 [2024-07-13 15:44:45.727066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.126 [2024-07-13 15:44:45.727094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.126 [2024-07-13 15:44:45.741217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.126 [2024-07-13 15:44:45.741765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.126 [2024-07-13 15:44:45.741792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:15.126 [2024-07-13 15:44:45.755923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) 
with pdu=0x2000190fef90 00:33:15.126 [2024-07-13 15:44:45.756371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.126 [2024-07-13 15:44:45.756399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:15.126 [2024-07-13 15:44:45.770898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.126 [2024-07-13 15:44:45.771387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.126 [2024-07-13 15:44:45.771420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:15.126 [2024-07-13 15:44:45.784126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.126 [2024-07-13 15:44:45.784523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.126 [2024-07-13 15:44:45.784551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.126 [2024-07-13 15:44:45.798322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.126 [2024-07-13 15:44:45.798929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.126 [2024-07-13 15:44:45.798958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:15.126 [2024-07-13 15:44:45.811800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.126 [2024-07-13 15:44:45.812324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.126 [2024-07-13 15:44:45.812352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:15.126 [2024-07-13 15:44:45.826064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.126 [2024-07-13 15:44:45.826561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.126 [2024-07-13 15:44:45.826588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:15.126 [2024-07-13 15:44:45.839569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.126 [2024-07-13 15:44:45.840063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.126 [2024-07-13 15:44:45.840092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.126 [2024-07-13 15:44:45.853521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.126 [2024-07-13 15:44:45.854106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.126 [2024-07-13 15:44:45.854135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:15.126 [2024-07-13 15:44:45.867135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.126 [2024-07-13 15:44:45.867742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.126 [2024-07-13 15:44:45.867770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:15.126 [2024-07-13 15:44:45.880531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.127 [2024-07-13 15:44:45.880923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.127 [2024-07-13 15:44:45.880950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:15.386 [2024-07-13 15:44:45.893482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.386 [2024-07-13 15:44:45.893874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.386 [2024-07-13 15:44:45.893902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.386 [2024-07-13 15:44:45.907427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.386 [2024-07-13 15:44:45.907911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.386 [2024-07-13 15:44:45.907948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:15.386 [2024-07-13 15:44:45.922411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.386 [2024-07-13 15:44:45.922978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.386 [2024-07-13 15:44:45.923007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:15.386 [2024-07-13 15:44:45.937043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.386 [2024-07-13 15:44:45.937566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.386 [2024-07-13 15:44:45.937594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:15.386 [2024-07-13 15:44:45.951740] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.386 [2024-07-13 15:44:45.952288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.386 [2024-07-13 15:44:45.952316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.386 [2024-07-13 15:44:45.965848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.386 [2024-07-13 15:44:45.966436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.386 [2024-07-13 15:44:45.966463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:15.386 [2024-07-13 15:44:45.979791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.386 [2024-07-13 15:44:45.980190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.386 [2024-07-13 15:44:45.980219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:15.386 [2024-07-13 15:44:45.995250] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.386 [2024-07-13 15:44:45.995720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.386 [2024-07-13 15:44:45.995749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:15.386 [2024-07-13 15:44:46.008590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.386 [2024-07-13 15:44:46.009234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.386 [2024-07-13 15:44:46.009263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.386 [2024-07-13 15:44:46.022419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.386 [2024-07-13 15:44:46.022925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.386 [2024-07-13 15:44:46.022955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:15.386 [2024-07-13 15:44:46.036272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.386 [2024-07-13 15:44:46.036713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.386 [2024-07-13 15:44:46.036741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:33:15.386 [2024-07-13 15:44:46.049736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.386 [2024-07-13 15:44:46.050180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.386 [2024-07-13 15:44:46.050208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:15.386 [2024-07-13 15:44:46.063828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.386 [2024-07-13 15:44:46.064383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.386 [2024-07-13 15:44:46.064411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.386 [2024-07-13 15:44:46.077940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.386 [2024-07-13 15:44:46.078381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.386 [2024-07-13 15:44:46.078409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:15.386 [2024-07-13 15:44:46.091720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.386 [2024-07-13 15:44:46.092219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.386 [2024-07-13 15:44:46.092247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:15.386 [2024-07-13 15:44:46.105344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.386 [2024-07-13 15:44:46.105778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.386 [2024-07-13 15:44:46.105806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:15.386 [2024-07-13 15:44:46.118756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.386 [2024-07-13 15:44:46.119195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.386 [2024-07-13 15:44:46.119224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.386 [2024-07-13 15:44:46.132019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.387 [2024-07-13 15:44:46.132498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.387 [2024-07-13 15:44:46.132535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:15.387 [2024-07-13 15:44:46.146001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.387 [2024-07-13 15:44:46.146518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.387 [2024-07-13 15:44:46.146547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:15.645 [2024-07-13 15:44:46.159321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.645 [2024-07-13 15:44:46.159854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.645 [2024-07-13 15:44:46.159890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:15.645 [2024-07-13 15:44:46.173405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.645 [2024-07-13 15:44:46.173950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.645 [2024-07-13 15:44:46.173979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.645 [2024-07-13 15:44:46.188171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.645 [2024-07-13 15:44:46.188610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.645 [2024-07-13 15:44:46.188638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:15.645 [2024-07-13 15:44:46.202710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.645 [2024-07-13 15:44:46.203197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.645 [2024-07-13 15:44:46.203225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:15.645 [2024-07-13 15:44:46.215704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.645 [2024-07-13 15:44:46.216091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.645 [2024-07-13 15:44:46.216119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:15.645 [2024-07-13 15:44:46.228960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.645 [2024-07-13 15:44:46.229387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.645 [2024-07-13 15:44:46.229415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.645 [2024-07-13 15:44:46.242571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.645 [2024-07-13 15:44:46.243140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.645 [2024-07-13 15:44:46.243183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:15.645 [2024-07-13 15:44:46.256318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.645 [2024-07-13 15:44:46.256712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.645 [2024-07-13 15:44:46.256741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:15.645 [2024-07-13 15:44:46.269407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.645 [2024-07-13 15:44:46.269901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.645 [2024-07-13 15:44:46.269929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:15.645 [2024-07-13 15:44:46.283285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.645 [2024-07-13 15:44:46.283776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.645 [2024-07-13 15:44:46.283805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.645 [2024-07-13 15:44:46.296467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.645 [2024-07-13 15:44:46.296935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.645 [2024-07-13 15:44:46.296964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:15.645 [2024-07-13 15:44:46.310563] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.645 [2024-07-13 15:44:46.311004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.645 [2024-07-13 15:44:46.311032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:15.645 [2024-07-13 15:44:46.323776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.645 [2024-07-13 15:44:46.324209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.645 [2024-07-13 15:44:46.324238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:15.645 [2024-07-13 15:44:46.338436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.645 [2024-07-13 15:44:46.338877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.645 [2024-07-13 15:44:46.338906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.645 [2024-07-13 15:44:46.351757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.645 [2024-07-13 15:44:46.352218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.645 [2024-07-13 15:44:46.352248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:15.645 [2024-07-13 15:44:46.365641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.645 [2024-07-13 15:44:46.366184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.645 [2024-07-13 15:44:46.366213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:15.645 [2024-07-13 15:44:46.379387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.646 [2024-07-13 15:44:46.379761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.646 [2024-07-13 15:44:46.379789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:15.646 [2024-07-13 15:44:46.392539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.646 [2024-07-13 15:44:46.392979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.646 [2024-07-13 15:44:46.393007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.646 [2024-07-13 15:44:46.406349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.646 [2024-07-13 15:44:46.406969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.646 [2024-07-13 15:44:46.406998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:15.904 [2024-07-13 15:44:46.418839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.904 [2024-07-13 15:44:46.419264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.904 
[2024-07-13 15:44:46.419293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:15.904 [2024-07-13 15:44:46.431271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.904 [2024-07-13 15:44:46.431798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.904 [2024-07-13 15:44:46.431827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:15.904 [2024-07-13 15:44:46.445252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.904 [2024-07-13 15:44:46.445632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.904 [2024-07-13 15:44:46.445661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.904 [2024-07-13 15:44:46.458539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.904 [2024-07-13 15:44:46.459035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.904 [2024-07-13 15:44:46.459063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:15.904 [2024-07-13 15:44:46.472425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.904 [2024-07-13 15:44:46.472925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.904 [2024-07-13 15:44:46.472954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:15.904 [2024-07-13 15:44:46.485675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.904 [2024-07-13 15:44:46.486010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.904 [2024-07-13 15:44:46.486046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:15.904 [2024-07-13 15:44:46.498121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.904 [2024-07-13 15:44:46.498537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.904 [2024-07-13 15:44:46.498564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.904 [2024-07-13 15:44:46.510813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.904 [2024-07-13 15:44:46.511146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:15.904 [2024-07-13 15:44:46.511175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:15.904 [2024-07-13 15:44:46.524265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.904 [2024-07-13 15:44:46.524654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.905 [2024-07-13 15:44:46.524683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:15.905 [2024-07-13 15:44:46.537690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.905 [2024-07-13 15:44:46.538203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.905 [2024-07-13 15:44:46.538232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:15.905 [2024-07-13 15:44:46.551787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.905 [2024-07-13 15:44:46.552167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.905 [2024-07-13 15:44:46.552195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.905 [2024-07-13 15:44:46.565048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.905 [2024-07-13 15:44:46.565639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.905 [2024-07-13 15:44:46.565667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:15.905 [2024-07-13 15:44:46.577560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.905 [2024-07-13 15:44:46.578139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.905 [2024-07-13 15:44:46.578182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:15.905 [2024-07-13 15:44:46.590288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.905 [2024-07-13 15:44:46.590938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.905 [2024-07-13 15:44:46.590967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:15.905 [2024-07-13 15:44:46.603294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.905 [2024-07-13 15:44:46.603799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.905 [2024-07-13 15:44:46.603844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.905 [2024-07-13 15:44:46.618268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.905 [2024-07-13 15:44:46.618756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.905 [2024-07-13 15:44:46.618784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:15.905 [2024-07-13 15:44:46.632361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.905 [2024-07-13 15:44:46.632900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.905 [2024-07-13 15:44:46.632929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:15.905 [2024-07-13 15:44:46.646656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.905 [2024-07-13 15:44:46.647088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.905 [2024-07-13 15:44:46.647116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:15.905 [2024-07-13 15:44:46.658863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:15.905 [2024-07-13 15:44:46.659330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.905 [2024-07-13 15:44:46.659372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.163 [2024-07-13 15:44:46.672167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:16.163 [2024-07-13 15:44:46.672812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.164 [2024-07-13 15:44:46.672841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:16.164 [2024-07-13 15:44:46.684923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:16.164 [2024-07-13 15:44:46.685347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.164 [2024-07-13 15:44:46.685377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:16.164 [2024-07-13 15:44:46.698586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:16.164 [2024-07-13 15:44:46.699034] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.164 [2024-07-13 15:44:46.699063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:16.164 [2024-07-13 15:44:46.711773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:16.164 [2024-07-13 15:44:46.712202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.164 [2024-07-13 15:44:46.712236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.164 [2024-07-13 15:44:46.725288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc4d30) with pdu=0x2000190fef90 00:33:16.164 [2024-07-13 15:44:46.725814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.164 [2024-07-13 15:44:46.725842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:16.164 00:33:16.164 Latency(us) 00:33:16.164 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:16.164 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:16.164 nvme0n1 : 2.01 2183.95 272.99 0.00 0.00 7308.73 4903.06 16699.54 00:33:16.164 =================================================================================================================== 00:33:16.164 Total : 2183.95 272.99 0.00 0.00 7308.73 4903.06 16699.54 00:33:16.164 0 00:33:16.164 15:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:16.164 15:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:16.164 15:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:16.164 15:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:16.164 | .driver_specific 00:33:16.164 | .nvme_error 00:33:16.164 | .status_code 00:33:16.164 | .command_transient_transport_error' 00:33:16.422 15:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 141 > 0 )) 00:33:16.422 15:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1255445 00:33:16.422 15:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1255445 ']' 00:33:16.422 15:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1255445 00:33:16.422 15:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:16.422 15:44:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:16.422 15:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1255445 00:33:16.422 15:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:16.422 15:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:16.422 
15:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1255445' 00:33:16.422 killing process with pid 1255445 00:33:16.422 15:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1255445 00:33:16.422 Received shutdown signal, test time was about 2.000000 seconds 00:33:16.422 00:33:16.422 Latency(us) 00:33:16.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:16.422 =================================================================================================================== 00:33:16.422 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:16.422 15:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1255445 00:33:16.680 15:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1254080 00:33:16.680 15:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1254080 ']' 00:33:16.680 15:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1254080 00:33:16.680 15:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:16.680 15:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:16.680 15:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1254080 00:33:16.680 15:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:16.680 15:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:16.680 15:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1254080' 00:33:16.680 killing process with pid 1254080 00:33:16.680 15:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1254080 00:33:16.680 15:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1254080 00:33:16.939 00:33:16.939 real 0m15.121s 00:33:16.939 user 0m30.198s 00:33:16.939 sys 0m3.876s 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:16.939 ************************************ 00:33:16.939 END TEST nvmf_digest_error 00:33:16.939 ************************************ 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:16.939 rmmod nvme_tcp 00:33:16.939 rmmod nvme_fabrics 00:33:16.939 rmmod nvme_keyring 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
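The pass/fail gate for this digest-error case is the (( 141 > 0 )) check traced above: the harness asks the bdevperf instance how many commands completed with a transient transport error and requires at least one. A minimal sketch of that query, assuming the same rpc.py path and /var/tmp/bperf.sock RPC socket used in this run:

# Query the bdevperf RPC socket for the transient-transport-error counter (paths/socket as in this run).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
errs=$("$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errs > 0 )) && echo "nvme0n1 saw $errs transient transport errors"   # 141 in the run above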
00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1254080 ']' 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1254080 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1254080 ']' 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1254080 00:33:16.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1254080) - No such process 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1254080 is not found' 00:33:16.939 Process with pid 1254080 is not found 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:16.939 15:44:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:19.473 15:44:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:19.474 00:33:19.474 real 0m34.704s 00:33:19.474 user 1m1.903s 00:33:19.474 sys 0m9.159s 00:33:19.474 15:44:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:19.474 15:44:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:19.474 ************************************ 00:33:19.474 END TEST nvmf_digest 00:33:19.474 ************************************ 00:33:19.474 15:44:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:19.474 15:44:49 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:33:19.474 15:44:49 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:33:19.474 15:44:49 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:33:19.474 15:44:49 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:19.474 15:44:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:19.474 15:44:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:19.474 15:44:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:19.474 ************************************ 00:33:19.474 START TEST nvmf_bdevperf 00:33:19.474 ************************************ 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:19.474 * Looking for test storage... 
00:33:19.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:19.474 15:44:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:21.377 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:21.377 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:21.377 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:21.378 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:21.378 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:21.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:21.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:33:21.378 00:33:21.378 --- 10.0.0.2 ping statistics --- 00:33:21.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.378 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:21.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:21.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:33:21.378 00:33:21.378 --- 10.0.0.1 ping statistics --- 00:33:21.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.378 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1257790 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1257790 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1257790 ']' 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:21.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:21.378 15:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:21.378 [2024-07-13 15:44:52.030637] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:33:21.378 [2024-07-13 15:44:52.030721] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:21.378 EAL: No free 2048 kB hugepages reported on node 1 00:33:21.378 [2024-07-13 15:44:52.069055] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
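Condensed, the nvmf_tcp_init sequence traced above moves one port of the detected pair into a private network namespace so the target (10.0.0.2) and the initiator (10.0.0.1) exchange real NVMe/TCP traffic on a single host. A minimal sketch of that wiring, assuming the cvl_0_0/cvl_0_1 interface names detected in this run:

# Test-network wiring as traced above: target port in a namespace, initiator port in the root ns.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                               # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address (root ns)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0       # target address
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP port 4420 through
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1  # sanity-check both directions
modprobe nvme-tcp                                             # initiator-side kernel transport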
00:33:21.378 [2024-07-13 15:44:52.101481] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:21.637 [2024-07-13 15:44:52.196358] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:21.637 [2024-07-13 15:44:52.196425] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:21.637 [2024-07-13 15:44:52.196442] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:21.637 [2024-07-13 15:44:52.196456] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:21.637 [2024-07-13 15:44:52.196468] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:21.637 [2024-07-13 15:44:52.196538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:21.637 [2024-07-13 15:44:52.196593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:21.637 [2024-07-13 15:44:52.196596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:21.637 15:44:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:21.637 15:44:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:33:21.637 15:44:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:21.637 15:44:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:21.637 15:44:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:21.637 15:44:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:21.637 15:44:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:21.637 15:44:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.637 15:44:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:21.637 [2024-07-13 15:44:52.346772] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:21.637 15:44:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.637 15:44:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:21.637 15:44:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.637 15:44:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:21.637 Malloc0 00:33:21.637 15:44:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.637 15:44:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:21.637 15:44:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.637 15:44:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:21.896 15:44:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.896 15:44:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:21.896 15:44:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.896 15:44:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:21.896 15:44:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.896 15:44:52 
nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:21.896 15:44:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.896 15:44:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:21.896 [2024-07-13 15:44:52.415904] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:21.896 15:44:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.896 15:44:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:21.896 15:44:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:21.896 15:44:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:21.896 15:44:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:21.896 15:44:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:21.896 15:44:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:21.896 { 00:33:21.896 "params": { 00:33:21.896 "name": "Nvme$subsystem", 00:33:21.896 "trtype": "$TEST_TRANSPORT", 00:33:21.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:21.896 "adrfam": "ipv4", 00:33:21.896 "trsvcid": "$NVMF_PORT", 00:33:21.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:21.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:21.896 "hdgst": ${hdgst:-false}, 00:33:21.896 "ddgst": ${ddgst:-false} 00:33:21.896 }, 00:33:21.896 "method": "bdev_nvme_attach_controller" 00:33:21.896 } 00:33:21.896 EOF 00:33:21.896 )") 00:33:21.896 15:44:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:21.896 15:44:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:21.896 15:44:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:21.896 15:44:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:21.896 "params": { 00:33:21.896 "name": "Nvme1", 00:33:21.896 "trtype": "tcp", 00:33:21.896 "traddr": "10.0.0.2", 00:33:21.896 "adrfam": "ipv4", 00:33:21.896 "trsvcid": "4420", 00:33:21.896 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:21.896 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:21.896 "hdgst": false, 00:33:21.896 "ddgst": false 00:33:21.896 }, 00:33:21.896 "method": "bdev_nvme_attach_controller" 00:33:21.896 }' 00:33:21.896 [2024-07-13 15:44:52.465386] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:33:21.896 [2024-07-13 15:44:52.465452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257933 ] 00:33:21.896 EAL: No free 2048 kB hugepages reported on node 1 00:33:21.896 [2024-07-13 15:44:52.499074] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:21.896 [2024-07-13 15:44:52.527959] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.896 [2024-07-13 15:44:52.616378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.466 Running I/O for 1 seconds... 
00:33:23.397 00:33:23.397 Latency(us) 00:33:23.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:23.397 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:23.397 Verification LBA range: start 0x0 length 0x4000 00:33:23.397 Nvme1n1 : 1.01 8500.01 33.20 0.00 0.00 14996.55 2864.17 18835.53 00:33:23.397 =================================================================================================================== 00:33:23.397 Total : 8500.01 33.20 0.00 0.00 14996.55 2864.17 18835.53 00:33:23.671 15:44:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1258077 00:33:23.671 15:44:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:23.671 15:44:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:23.671 15:44:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:23.671 15:44:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:23.671 15:44:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:23.671 15:44:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:23.671 15:44:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:23.671 { 00:33:23.671 "params": { 00:33:23.671 "name": "Nvme$subsystem", 00:33:23.671 "trtype": "$TEST_TRANSPORT", 00:33:23.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:23.671 "adrfam": "ipv4", 00:33:23.671 "trsvcid": "$NVMF_PORT", 00:33:23.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:23.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:23.671 "hdgst": ${hdgst:-false}, 00:33:23.671 "ddgst": ${ddgst:-false} 00:33:23.671 }, 00:33:23.671 "method": "bdev_nvme_attach_controller" 00:33:23.671 } 00:33:23.671 EOF 00:33:23.671 )") 00:33:23.671 15:44:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:23.671 15:44:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:23.671 15:44:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:23.671 15:44:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:23.671 "params": { 00:33:23.671 "name": "Nvme1", 00:33:23.671 "trtype": "tcp", 00:33:23.671 "traddr": "10.0.0.2", 00:33:23.671 "adrfam": "ipv4", 00:33:23.671 "trsvcid": "4420", 00:33:23.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:23.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:23.671 "hdgst": false, 00:33:23.671 "ddgst": false 00:33:23.671 }, 00:33:23.671 "method": "bdev_nvme_attach_controller" 00:33:23.671 }' 00:33:23.671 [2024-07-13 15:44:54.234396] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:33:23.671 [2024-07-13 15:44:54.234469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258077 ] 00:33:23.671 EAL: No free 2048 kB hugepages reported on node 1 00:33:23.671 [2024-07-13 15:44:54.266668] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
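Both bdevperf invocations above follow the same recipe: the nvmf_tgt running inside the namespace exports a malloc-backed namespace on the 10.0.0.2:4420 TCP listener, and bdevperf attaches to it through a JSON config passed over an inherited file descriptor. A condensed sketch of that recipe, assuming workspace-relative rpc.py and bdevperf paths; the outer "subsystems" wrapper is the standard SPDK JSON-config shape and is an assumption here, only the attach-controller params appear verbatim in the trace:

# Target side: malloc bdev exported through an NVMe/TCP subsystem (NQNs and sizes as in this run).
RPC=scripts/rpc.py                      # talks to the nvmf_tgt started inside cvl_0_0_ns_spdk
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf reads the attach-controller config from fd 62 (matching --json /dev/fd/62 above).
build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 62<<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller","params":{
 "name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420",
 "subnqn":"nqn.2016-06.io.spdk:cnode1","hostnqn":"nqn.2016-06.io.spdk:host1",
 "hdgst":false,"ddgst":false}}]}]}
EOF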
00:33:23.671 [2024-07-13 15:44:54.295324] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:23.671 [2024-07-13 15:44:54.382889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:23.929 Running I/O for 15 seconds... 00:33:26.460 15:44:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1257790 00:33:26.460 15:44:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:26.460 [2024-07-13 15:44:57.204542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.204593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.204635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.204655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.204676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:53616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.204693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.204709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.204726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.204744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.204776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.204795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.204810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.204828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.204843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.204861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.204887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.204907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.204940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.204958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.204975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.204992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:53680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:53696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:53744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 
15:44:57.205295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:53776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:53784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:53808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:53824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:53840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:53856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.460 [2024-07-13 15:44:57.205827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:96 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.460 [2024-07-13 15:44:57.205840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.205878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.205894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.205909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.205924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.205939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.205952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.205967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.205981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.205996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:53992 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 
15:44:57.206461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206721] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.461 [2024-07-13 15:44:57.206821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.461 [2024-07-13 15:44:57.206861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.461 [2024-07-13 15:44:57.206900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:53368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.461 [2024-07-13 15:44:57.206943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:53376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.461 [2024-07-13 15:44:57.206978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.206992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:53384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.461 [2024-07-13 15:44:57.207007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.207021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:53392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.461 [2024-07-13 15:44:57.207034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.207048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:53400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.461 [2024-07-13 15:44:57.207068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.207083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.461 [2024-07-13 15:44:57.207096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.461 [2024-07-13 15:44:57.207110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.462 [2024-07-13 15:44:57.207123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:53424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.462 [2024-07-13 15:44:57.207166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:53432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.462 [2024-07-13 15:44:57.207194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.462 [2024-07-13 15:44:57.207234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:53448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.462 [2024-07-13 15:44:57.207260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.462 [2024-07-13 15:44:57.207286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.462 [2024-07-13 15:44:57.207311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.462 [2024-07-13 15:44:57.207339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.462 [2024-07-13 15:44:57.207365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.462 [2024-07-13 15:44:57.207389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.462 [2024-07-13 15:44:57.207414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.462 [2024-07-13 15:44:57.207439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.462 [2024-07-13 15:44:57.207464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.462 [2024-07-13 15:44:57.207488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:54248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.462 [2024-07-13 15:44:57.207513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.462 [2024-07-13 15:44:57.207539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.462 [2024-07-13 15:44:57.207564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.462 [2024-07-13 15:44:57.207595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 
[2024-07-13 15:44:57.207608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.462 [2024-07-13 15:44:57.207620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:54288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.462 [2024-07-13 15:44:57.207645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:54296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.462 [2024-07-13 15:44:57.207677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.462 [2024-07-13 15:44:57.207702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:54312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.462 [2024-07-13 15:44:57.207728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.462 [2024-07-13 15:44:57.207753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:54328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.462 [2024-07-13 15:44:57.207779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.462 [2024-07-13 15:44:57.207804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.462 [2024-07-13 15:44:57.207829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.462 [2024-07-13 15:44:57.207876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.462 [2024-07-13 15:44:57.207904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.462 [2024-07-13 15:44:57.207952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.207966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:53472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.462 [2024-07-13 15:44:57.207979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.208004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.462 [2024-07-13 15:44:57.208017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.208031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.462 [2024-07-13 15:44:57.208043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.208076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.462 [2024-07-13 15:44:57.208090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.208104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.462 [2024-07-13 15:44:57.208117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.208131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:53512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.462 [2024-07-13 15:44:57.208157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.208171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:53520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.462 [2024-07-13 15:44:57.208184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.208198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.462 [2024-07-13 15:44:57.208225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.208238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:115 nsid:1 lba:53536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.462 [2024-07-13 15:44:57.208250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.208264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.462 [2024-07-13 15:44:57.208275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.208288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:53552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.462 [2024-07-13 15:44:57.208300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.208313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.462 [2024-07-13 15:44:57.208325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.208339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:53568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.462 [2024-07-13 15:44:57.208350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.462 [2024-07-13 15:44:57.208363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:53576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.463 [2024-07-13 15:44:57.208375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.463 [2024-07-13 15:44:57.208388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:53584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.463 [2024-07-13 15:44:57.208400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.463 [2024-07-13 15:44:57.208413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2561d60 is same with the state(5) to be set 00:33:26.463 [2024-07-13 15:44:57.208432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.463 [2024-07-13 15:44:57.208443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.463 [2024-07-13 15:44:57.208454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53592 len:8 PRP1 0x0 PRP2 0x0 00:33:26.463 [2024-07-13 15:44:57.208466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.463 [2024-07-13 15:44:57.208527] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2561d60 was disconnected and freed. reset controller. 
00:33:26.463 [2024-07-13 15:44:57.211832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.463 [2024-07-13 15:44:57.211999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.463 [2024-07-13 15:44:57.212733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.463 [2024-07-13 15:44:57.212761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.463 [2024-07-13 15:44:57.212777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.463 [2024-07-13 15:44:57.213018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.463 [2024-07-13 15:44:57.213245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.463 [2024-07-13 15:44:57.213265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.463 [2024-07-13 15:44:57.213281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.463 [2024-07-13 15:44:57.216779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.722 [2024-07-13 15:44:57.226326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.722 [2024-07-13 15:44:57.226780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.722 [2024-07-13 15:44:57.226809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.722 [2024-07-13 15:44:57.226825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.722 [2024-07-13 15:44:57.227072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.722 [2024-07-13 15:44:57.227292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.722 [2024-07-13 15:44:57.227311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.722 [2024-07-13 15:44:57.227323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.722 [2024-07-13 15:44:57.230934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.722 [2024-07-13 15:44:57.240230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.722 [2024-07-13 15:44:57.240687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.722 [2024-07-13 15:44:57.240718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.722 [2024-07-13 15:44:57.240736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.722 [2024-07-13 15:44:57.240999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.722 [2024-07-13 15:44:57.241242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.722 [2024-07-13 15:44:57.241271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.722 [2024-07-13 15:44:57.241287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.722 [2024-07-13 15:44:57.244863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.722 [2024-07-13 15:44:57.254153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.722 [2024-07-13 15:44:57.254608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.722 [2024-07-13 15:44:57.254643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.722 [2024-07-13 15:44:57.254671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.722 [2024-07-13 15:44:57.254972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.722 [2024-07-13 15:44:57.255238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.722 [2024-07-13 15:44:57.255266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.722 [2024-07-13 15:44:57.255290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.722 [2024-07-13 15:44:57.258921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.722 [2024-07-13 15:44:57.268022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.722 [2024-07-13 15:44:57.268470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.722 [2024-07-13 15:44:57.268505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.722 [2024-07-13 15:44:57.268536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.722 [2024-07-13 15:44:57.268818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.722 [2024-07-13 15:44:57.269095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.722 [2024-07-13 15:44:57.269122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.722 [2024-07-13 15:44:57.269147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.722 [2024-07-13 15:44:57.272772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.722 [2024-07-13 15:44:57.281879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.722 [2024-07-13 15:44:57.282347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.722 [2024-07-13 15:44:57.282382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.722 [2024-07-13 15:44:57.282411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.722 [2024-07-13 15:44:57.282695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.722 [2024-07-13 15:44:57.282973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.722 [2024-07-13 15:44:57.282999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.722 [2024-07-13 15:44:57.283024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.722 [2024-07-13 15:44:57.286642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.722 [2024-07-13 15:44:57.295767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.722 [2024-07-13 15:44:57.296265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.722 [2024-07-13 15:44:57.296301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.722 [2024-07-13 15:44:57.296332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.722 [2024-07-13 15:44:57.296614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.722 [2024-07-13 15:44:57.296897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.722 [2024-07-13 15:44:57.296924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.722 [2024-07-13 15:44:57.296949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.722 [2024-07-13 15:44:57.300577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.722 [2024-07-13 15:44:57.309681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.722 [2024-07-13 15:44:57.310161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.722 [2024-07-13 15:44:57.310197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.722 [2024-07-13 15:44:57.310227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.722 [2024-07-13 15:44:57.310508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.722 [2024-07-13 15:44:57.310772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.722 [2024-07-13 15:44:57.310800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.722 [2024-07-13 15:44:57.310824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.722 [2024-07-13 15:44:57.314459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.722 [2024-07-13 15:44:57.323558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.722 [2024-07-13 15:44:57.324055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.722 [2024-07-13 15:44:57.324091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.722 [2024-07-13 15:44:57.324121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.722 [2024-07-13 15:44:57.324404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.722 [2024-07-13 15:44:57.324669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.722 [2024-07-13 15:44:57.324695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.722 [2024-07-13 15:44:57.324719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.722 [2024-07-13 15:44:57.328341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.722 [2024-07-13 15:44:57.337439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.722 [2024-07-13 15:44:57.337918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.722 [2024-07-13 15:44:57.337953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.722 [2024-07-13 15:44:57.337983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.722 [2024-07-13 15:44:57.338278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.722 [2024-07-13 15:44:57.338545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.722 [2024-07-13 15:44:57.338572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.722 [2024-07-13 15:44:57.338596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.722 [2024-07-13 15:44:57.342232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.722 [2024-07-13 15:44:57.351334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.722 [2024-07-13 15:44:57.351801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.722 [2024-07-13 15:44:57.351836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.722 [2024-07-13 15:44:57.351875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.722 [2024-07-13 15:44:57.352171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.722 [2024-07-13 15:44:57.352436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.722 [2024-07-13 15:44:57.352463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.722 [2024-07-13 15:44:57.352487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.722 [2024-07-13 15:44:57.356117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.722 [2024-07-13 15:44:57.365222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.722 [2024-07-13 15:44:57.365681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.722 [2024-07-13 15:44:57.365716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.722 [2024-07-13 15:44:57.365748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.722 [2024-07-13 15:44:57.366041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.722 [2024-07-13 15:44:57.366308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.722 [2024-07-13 15:44:57.366334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.722 [2024-07-13 15:44:57.366359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.722 [2024-07-13 15:44:57.369982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.722 [2024-07-13 15:44:57.379102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.722 [2024-07-13 15:44:57.379571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.722 [2024-07-13 15:44:57.379605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.722 [2024-07-13 15:44:57.379635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.722 [2024-07-13 15:44:57.379927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.723 [2024-07-13 15:44:57.380193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.723 [2024-07-13 15:44:57.380219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.723 [2024-07-13 15:44:57.380251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.723 [2024-07-13 15:44:57.383862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.723 [2024-07-13 15:44:57.392960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.723 [2024-07-13 15:44:57.393412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.723 [2024-07-13 15:44:57.393447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.723 [2024-07-13 15:44:57.393476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.723 [2024-07-13 15:44:57.393758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.723 [2024-07-13 15:44:57.394034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.723 [2024-07-13 15:44:57.394062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.723 [2024-07-13 15:44:57.394086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.723 [2024-07-13 15:44:57.397706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.723 [2024-07-13 15:44:57.406797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.723 [2024-07-13 15:44:57.407274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.723 [2024-07-13 15:44:57.407308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.723 [2024-07-13 15:44:57.407338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.723 [2024-07-13 15:44:57.407618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.723 [2024-07-13 15:44:57.407893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.723 [2024-07-13 15:44:57.407919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.723 [2024-07-13 15:44:57.407944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.723 [2024-07-13 15:44:57.411558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.723 [2024-07-13 15:44:57.420698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.723 [2024-07-13 15:44:57.421179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.723 [2024-07-13 15:44:57.421215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.723 [2024-07-13 15:44:57.421245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.723 [2024-07-13 15:44:57.421527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.723 [2024-07-13 15:44:57.421791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.723 [2024-07-13 15:44:57.421817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.723 [2024-07-13 15:44:57.421842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.723 [2024-07-13 15:44:57.425471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.723 [2024-07-13 15:44:57.434563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.723 [2024-07-13 15:44:57.435047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.723 [2024-07-13 15:44:57.435082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.723 [2024-07-13 15:44:57.435112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.723 [2024-07-13 15:44:57.435393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.723 [2024-07-13 15:44:57.435657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.723 [2024-07-13 15:44:57.435683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.723 [2024-07-13 15:44:57.435708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.723 [2024-07-13 15:44:57.439339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.723 [2024-07-13 15:44:57.448434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.723 [2024-07-13 15:44:57.448905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.723 [2024-07-13 15:44:57.448940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.723 [2024-07-13 15:44:57.448970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.723 [2024-07-13 15:44:57.449252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.723 [2024-07-13 15:44:57.449516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.723 [2024-07-13 15:44:57.449543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.723 [2024-07-13 15:44:57.449567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.723 [2024-07-13 15:44:57.453194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.723 [2024-07-13 15:44:57.462294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.723 [2024-07-13 15:44:57.462773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.723 [2024-07-13 15:44:57.462809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.723 [2024-07-13 15:44:57.462838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.723 [2024-07-13 15:44:57.463129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.723 [2024-07-13 15:44:57.463393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.723 [2024-07-13 15:44:57.463419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.723 [2024-07-13 15:44:57.463444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.723 [2024-07-13 15:44:57.467075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.723 [2024-07-13 15:44:57.476174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.723 [2024-07-13 15:44:57.476598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.723 [2024-07-13 15:44:57.476633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.723 [2024-07-13 15:44:57.476663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.723 [2024-07-13 15:44:57.476960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.723 [2024-07-13 15:44:57.477226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.723 [2024-07-13 15:44:57.477252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.723 [2024-07-13 15:44:57.477276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.723 [2024-07-13 15:44:57.480903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.982 [2024-07-13 15:44:57.490233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.982 [2024-07-13 15:44:57.490713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.982 [2024-07-13 15:44:57.490748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.982 [2024-07-13 15:44:57.490778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.982 [2024-07-13 15:44:57.491097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.982 [2024-07-13 15:44:57.491374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.982 [2024-07-13 15:44:57.491402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.982 [2024-07-13 15:44:57.491426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.982 [2024-07-13 15:44:57.495061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.982 [2024-07-13 15:44:57.504158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.982 [2024-07-13 15:44:57.504603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.982 [2024-07-13 15:44:57.504639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.982 [2024-07-13 15:44:57.504668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.982 [2024-07-13 15:44:57.504961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.982 [2024-07-13 15:44:57.505226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.982 [2024-07-13 15:44:57.505253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.982 [2024-07-13 15:44:57.505277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.982 [2024-07-13 15:44:57.508901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.982 [2024-07-13 15:44:57.518201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.982 [2024-07-13 15:44:57.518670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.982 [2024-07-13 15:44:57.518704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.982 [2024-07-13 15:44:57.518733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.982 [2024-07-13 15:44:57.519029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.982 [2024-07-13 15:44:57.519293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.982 [2024-07-13 15:44:57.519320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.982 [2024-07-13 15:44:57.519353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.982 [2024-07-13 15:44:57.522976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.982 [2024-07-13 15:44:57.532074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.982 [2024-07-13 15:44:57.532544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.982 [2024-07-13 15:44:57.532578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.982 [2024-07-13 15:44:57.532607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.982 [2024-07-13 15:44:57.532904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.982 [2024-07-13 15:44:57.533168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.982 [2024-07-13 15:44:57.533195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.982 [2024-07-13 15:44:57.533220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.982 [2024-07-13 15:44:57.536837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.982 [2024-07-13 15:44:57.545943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.982 [2024-07-13 15:44:57.546419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.982 [2024-07-13 15:44:57.546453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.982 [2024-07-13 15:44:57.546483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.982 [2024-07-13 15:44:57.546763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.982 [2024-07-13 15:44:57.547038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.982 [2024-07-13 15:44:57.547065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.982 [2024-07-13 15:44:57.547090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.982 [2024-07-13 15:44:57.550705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.982 [2024-07-13 15:44:57.559793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.982 [2024-07-13 15:44:57.560284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.982 [2024-07-13 15:44:57.560319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.982 [2024-07-13 15:44:57.560349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.982 [2024-07-13 15:44:57.560630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.982 [2024-07-13 15:44:57.560905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.982 [2024-07-13 15:44:57.560932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.983 [2024-07-13 15:44:57.560956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.983 [2024-07-13 15:44:57.564574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.983 [2024-07-13 15:44:57.573688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.983 [2024-07-13 15:44:57.574138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.983 [2024-07-13 15:44:57.574178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.983 [2024-07-13 15:44:57.574208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.983 [2024-07-13 15:44:57.574489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.983 [2024-07-13 15:44:57.574754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.983 [2024-07-13 15:44:57.574780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.983 [2024-07-13 15:44:57.574805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.983 [2024-07-13 15:44:57.578433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.983 [2024-07-13 15:44:57.587732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.983 [2024-07-13 15:44:57.588206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.983 [2024-07-13 15:44:57.588241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.983 [2024-07-13 15:44:57.588270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.983 [2024-07-13 15:44:57.588551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.983 [2024-07-13 15:44:57.588815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.983 [2024-07-13 15:44:57.588842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.983 [2024-07-13 15:44:57.588877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.983 [2024-07-13 15:44:57.592496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.983 [2024-07-13 15:44:57.601586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.983 [2024-07-13 15:44:57.602060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.983 [2024-07-13 15:44:57.602095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.983 [2024-07-13 15:44:57.602125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.983 [2024-07-13 15:44:57.602407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.983 [2024-07-13 15:44:57.602670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.983 [2024-07-13 15:44:57.602697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.983 [2024-07-13 15:44:57.602721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.983 [2024-07-13 15:44:57.606351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.983 [2024-07-13 15:44:57.615438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.983 [2024-07-13 15:44:57.615907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.983 [2024-07-13 15:44:57.615942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.983 [2024-07-13 15:44:57.615972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.983 [2024-07-13 15:44:57.616254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.983 [2024-07-13 15:44:57.616524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.983 [2024-07-13 15:44:57.616550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.983 [2024-07-13 15:44:57.616575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.983 [2024-07-13 15:44:57.620203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.983 [2024-07-13 15:44:57.629348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.983 [2024-07-13 15:44:57.629831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.983 [2024-07-13 15:44:57.629875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.983 [2024-07-13 15:44:57.629907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.983 [2024-07-13 15:44:57.630188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.983 [2024-07-13 15:44:57.630454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.983 [2024-07-13 15:44:57.630480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.983 [2024-07-13 15:44:57.630505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.983 [2024-07-13 15:44:57.634128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.983 [2024-07-13 15:44:57.643216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.983 [2024-07-13 15:44:57.643706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.983 [2024-07-13 15:44:57.643741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.983 [2024-07-13 15:44:57.643770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.983 [2024-07-13 15:44:57.644070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.983 [2024-07-13 15:44:57.644335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.983 [2024-07-13 15:44:57.644362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.983 [2024-07-13 15:44:57.644387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.983 [2024-07-13 15:44:57.648014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.983 [2024-07-13 15:44:57.657103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.983 [2024-07-13 15:44:57.657548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.983 [2024-07-13 15:44:57.657582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.983 [2024-07-13 15:44:57.657613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.983 [2024-07-13 15:44:57.657905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.983 [2024-07-13 15:44:57.658170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.983 [2024-07-13 15:44:57.658196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.983 [2024-07-13 15:44:57.658220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.983 [2024-07-13 15:44:57.661844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.983 [2024-07-13 15:44:57.670949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.983 [2024-07-13 15:44:57.671395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.983 [2024-07-13 15:44:57.671430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.983 [2024-07-13 15:44:57.671460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.983 [2024-07-13 15:44:57.671743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.983 [2024-07-13 15:44:57.672019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.983 [2024-07-13 15:44:57.672045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.983 [2024-07-13 15:44:57.672071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.983 [2024-07-13 15:44:57.675687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.983 [2024-07-13 15:44:57.684994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.983 [2024-07-13 15:44:57.685463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.983 [2024-07-13 15:44:57.685497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.983 [2024-07-13 15:44:57.685527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.983 [2024-07-13 15:44:57.685811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.983 [2024-07-13 15:44:57.686087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.983 [2024-07-13 15:44:57.686114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.983 [2024-07-13 15:44:57.686139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.983 [2024-07-13 15:44:57.689754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.983 [2024-07-13 15:44:57.698924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.983 [2024-07-13 15:44:57.699381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.983 [2024-07-13 15:44:57.699416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.983 [2024-07-13 15:44:57.699446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.983 [2024-07-13 15:44:57.699727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.983 [2024-07-13 15:44:57.700002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.983 [2024-07-13 15:44:57.700028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.983 [2024-07-13 15:44:57.700054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.983 [2024-07-13 15:44:57.703671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.983 [2024-07-13 15:44:57.712764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.983 [2024-07-13 15:44:57.713210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.983 [2024-07-13 15:44:57.713246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.983 [2024-07-13 15:44:57.713283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.983 [2024-07-13 15:44:57.713564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.983 [2024-07-13 15:44:57.713829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.984 [2024-07-13 15:44:57.713855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.984 [2024-07-13 15:44:57.713891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.984 [2024-07-13 15:44:57.717507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:26.984 [2024-07-13 15:44:57.726604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.984 [2024-07-13 15:44:57.727082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.984 [2024-07-13 15:44:57.727117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.984 [2024-07-13 15:44:57.727148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.984 [2024-07-13 15:44:57.727431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.984 [2024-07-13 15:44:57.727695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.984 [2024-07-13 15:44:57.727722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.984 [2024-07-13 15:44:57.727747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.984 [2024-07-13 15:44:57.731374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:26.984 [2024-07-13 15:44:57.740461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.984 [2024-07-13 15:44:57.740883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.984 [2024-07-13 15:44:57.740918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:26.984 [2024-07-13 15:44:57.740948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:26.984 [2024-07-13 15:44:57.741229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:26.984 [2024-07-13 15:44:57.741496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.984 [2024-07-13 15:44:57.741522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.984 [2024-07-13 15:44:57.741547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.984 [2024-07-13 15:44:57.745280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.242 [2024-07-13 15:44:57.754571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.243 [2024-07-13 15:44:57.755060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.243 [2024-07-13 15:44:57.755095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.243 [2024-07-13 15:44:57.755126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.243 [2024-07-13 15:44:57.755412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.243 [2024-07-13 15:44:57.755678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.243 [2024-07-13 15:44:57.755714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.243 [2024-07-13 15:44:57.755740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.243 [2024-07-13 15:44:57.759392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.243 [2024-07-13 15:44:57.768479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.243 [2024-07-13 15:44:57.768945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.243 [2024-07-13 15:44:57.768981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.243 [2024-07-13 15:44:57.769011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.243 [2024-07-13 15:44:57.769293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.243 [2024-07-13 15:44:57.769556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.243 [2024-07-13 15:44:57.769583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.243 [2024-07-13 15:44:57.769607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.243 [2024-07-13 15:44:57.773235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.243 [2024-07-13 15:44:57.782535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.243 [2024-07-13 15:44:57.782986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.243 [2024-07-13 15:44:57.783021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.243 [2024-07-13 15:44:57.783050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.243 [2024-07-13 15:44:57.783329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.243 [2024-07-13 15:44:57.783594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.243 [2024-07-13 15:44:57.783620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.243 [2024-07-13 15:44:57.783644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.243 [2024-07-13 15:44:57.787274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.243 [2024-07-13 15:44:57.796575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.243 [2024-07-13 15:44:57.797063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.243 [2024-07-13 15:44:57.797097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.243 [2024-07-13 15:44:57.797127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.243 [2024-07-13 15:44:57.797408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.243 [2024-07-13 15:44:57.797673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.243 [2024-07-13 15:44:57.797700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.243 [2024-07-13 15:44:57.797724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.243 [2024-07-13 15:44:57.801356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.243 [2024-07-13 15:44:57.810590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.243 [2024-07-13 15:44:57.811080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.243 [2024-07-13 15:44:57.811118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.243 [2024-07-13 15:44:57.811148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.243 [2024-07-13 15:44:57.811430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.243 [2024-07-13 15:44:57.811695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.243 [2024-07-13 15:44:57.811721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.243 [2024-07-13 15:44:57.811747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.243 [2024-07-13 15:44:57.815392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.243 [2024-07-13 15:44:57.824508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.243 [2024-07-13 15:44:57.824959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.243 [2024-07-13 15:44:57.824995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.243 [2024-07-13 15:44:57.825025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.243 [2024-07-13 15:44:57.825308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.243 [2024-07-13 15:44:57.825574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.243 [2024-07-13 15:44:57.825600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.243 [2024-07-13 15:44:57.825625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.243 [2024-07-13 15:44:57.829254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.243 [2024-07-13 15:44:57.838609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.243 [2024-07-13 15:44:57.839112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.243 [2024-07-13 15:44:57.839153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.243 [2024-07-13 15:44:57.839183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.243 [2024-07-13 15:44:57.839470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.243 [2024-07-13 15:44:57.839734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.243 [2024-07-13 15:44:57.839760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.243 [2024-07-13 15:44:57.839786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.243 [2024-07-13 15:44:57.843420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.243 [2024-07-13 15:44:57.852529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.243 [2024-07-13 15:44:57.853006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.243 [2024-07-13 15:44:57.853041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.243 [2024-07-13 15:44:57.853070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.243 [2024-07-13 15:44:57.853356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.243 [2024-07-13 15:44:57.853621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.243 [2024-07-13 15:44:57.853648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.243 [2024-07-13 15:44:57.853674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.243 [2024-07-13 15:44:57.857302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.243 [2024-07-13 15:44:57.866404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.243 [2024-07-13 15:44:57.866875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.243 [2024-07-13 15:44:57.866910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.243 [2024-07-13 15:44:57.866940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.243 [2024-07-13 15:44:57.867223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.243 [2024-07-13 15:44:57.867488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.243 [2024-07-13 15:44:57.867515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.243 [2024-07-13 15:44:57.867540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.243 [2024-07-13 15:44:57.871164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.243 [2024-07-13 15:44:57.880269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.243 [2024-07-13 15:44:57.880749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.243 [2024-07-13 15:44:57.880782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.243 [2024-07-13 15:44:57.880812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.243 [2024-07-13 15:44:57.881111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.243 [2024-07-13 15:44:57.881376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.243 [2024-07-13 15:44:57.881402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.243 [2024-07-13 15:44:57.881426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.243 [2024-07-13 15:44:57.885060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.243 [2024-07-13 15:44:57.894158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.243 [2024-07-13 15:44:57.894630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.243 [2024-07-13 15:44:57.894665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.243 [2024-07-13 15:44:57.894695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.243 [2024-07-13 15:44:57.894985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.243 [2024-07-13 15:44:57.895249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.243 [2024-07-13 15:44:57.895277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.244 [2024-07-13 15:44:57.895309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.244 [2024-07-13 15:44:57.898937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.244 [2024-07-13 15:44:57.908035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.244 [2024-07-13 15:44:57.908501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.244 [2024-07-13 15:44:57.908534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.244 [2024-07-13 15:44:57.908564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.244 [2024-07-13 15:44:57.908847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.244 [2024-07-13 15:44:57.909122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.244 [2024-07-13 15:44:57.909149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.244 [2024-07-13 15:44:57.909174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.244 [2024-07-13 15:44:57.912791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.244 [2024-07-13 15:44:57.921906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.244 [2024-07-13 15:44:57.922348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.244 [2024-07-13 15:44:57.922383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.244 [2024-07-13 15:44:57.922413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.244 [2024-07-13 15:44:57.922693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.244 [2024-07-13 15:44:57.922967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.244 [2024-07-13 15:44:57.922994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.244 [2024-07-13 15:44:57.923019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.244 [2024-07-13 15:44:57.926640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.244 [2024-07-13 15:44:57.935745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.244 [2024-07-13 15:44:57.936191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.244 [2024-07-13 15:44:57.936226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.244 [2024-07-13 15:44:57.936256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.244 [2024-07-13 15:44:57.936535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.244 [2024-07-13 15:44:57.936800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.244 [2024-07-13 15:44:57.936826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.244 [2024-07-13 15:44:57.936851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.244 [2024-07-13 15:44:57.940476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.244 [2024-07-13 15:44:57.949586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.244 [2024-07-13 15:44:57.950081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.244 [2024-07-13 15:44:57.950117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.244 [2024-07-13 15:44:57.950146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.244 [2024-07-13 15:44:57.950429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.244 [2024-07-13 15:44:57.950694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.244 [2024-07-13 15:44:57.950721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.244 [2024-07-13 15:44:57.950745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.244 [2024-07-13 15:44:57.954377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.244 [2024-07-13 15:44:57.963472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.244 [2024-07-13 15:44:57.963962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.244 [2024-07-13 15:44:57.963997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.244 [2024-07-13 15:44:57.964027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.244 [2024-07-13 15:44:57.964309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.244 [2024-07-13 15:44:57.964574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.244 [2024-07-13 15:44:57.964601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.244 [2024-07-13 15:44:57.964625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.244 [2024-07-13 15:44:57.968251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.244 [2024-07-13 15:44:57.977353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.244 [2024-07-13 15:44:57.977840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.244 [2024-07-13 15:44:57.977884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.244 [2024-07-13 15:44:57.977927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.244 [2024-07-13 15:44:57.978205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.244 [2024-07-13 15:44:57.978469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.244 [2024-07-13 15:44:57.978495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.244 [2024-07-13 15:44:57.978520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.244 [2024-07-13 15:44:57.982151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.244 [2024-07-13 15:44:57.991260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.244 [2024-07-13 15:44:57.991682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.244 [2024-07-13 15:44:57.991716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.244 [2024-07-13 15:44:57.991746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.244 [2024-07-13 15:44:57.992047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.244 [2024-07-13 15:44:57.992313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.244 [2024-07-13 15:44:57.992339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.244 [2024-07-13 15:44:57.992364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.244 [2024-07-13 15:44:57.995987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.244 [2024-07-13 15:44:58.005379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.244 [2024-07-13 15:44:58.005883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.244 [2024-07-13 15:44:58.005920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.244 [2024-07-13 15:44:58.005950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.244 [2024-07-13 15:44:58.006249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.244 [2024-07-13 15:44:58.006515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.244 [2024-07-13 15:44:58.006542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.244 [2024-07-13 15:44:58.006567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.503 [2024-07-13 15:44:58.010290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.503 [2024-07-13 15:44:58.019270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.503 [2024-07-13 15:44:58.019744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.503 [2024-07-13 15:44:58.019778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.503 [2024-07-13 15:44:58.019809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.503 [2024-07-13 15:44:58.020105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.503 [2024-07-13 15:44:58.020371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.503 [2024-07-13 15:44:58.020398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.503 [2024-07-13 15:44:58.020423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.503 [2024-07-13 15:44:58.024049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.503 [2024-07-13 15:44:58.033146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.503 [2024-07-13 15:44:58.033613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.503 [2024-07-13 15:44:58.033648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.503 [2024-07-13 15:44:58.033678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.503 [2024-07-13 15:44:58.033970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.503 [2024-07-13 15:44:58.034235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.503 [2024-07-13 15:44:58.034262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.503 [2024-07-13 15:44:58.034294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.503 [2024-07-13 15:44:58.037919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.503 [2024-07-13 15:44:58.047064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.503 [2024-07-13 15:44:58.047534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.503 [2024-07-13 15:44:58.047569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.503 [2024-07-13 15:44:58.047599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.503 [2024-07-13 15:44:58.047894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.503 [2024-07-13 15:44:58.048159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.503 [2024-07-13 15:44:58.048185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.503 [2024-07-13 15:44:58.048210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.503 [2024-07-13 15:44:58.051828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.503 [2024-07-13 15:44:58.060924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.503 [2024-07-13 15:44:58.061367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.503 [2024-07-13 15:44:58.061402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.503 [2024-07-13 15:44:58.061432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.503 [2024-07-13 15:44:58.061715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.503 [2024-07-13 15:44:58.061991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.503 [2024-07-13 15:44:58.062018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.503 [2024-07-13 15:44:58.062043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.503 [2024-07-13 15:44:58.065659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.503 [2024-07-13 15:44:58.074967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.503 [2024-07-13 15:44:58.075419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.503 [2024-07-13 15:44:58.075455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.503 [2024-07-13 15:44:58.075484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.503 [2024-07-13 15:44:58.075766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.503 [2024-07-13 15:44:58.076044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.504 [2024-07-13 15:44:58.076082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.504 [2024-07-13 15:44:58.076108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.504 [2024-07-13 15:44:58.079725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.504 [2024-07-13 15:44:58.088815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.504 [2024-07-13 15:44:58.089304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.504 [2024-07-13 15:44:58.089344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.504 [2024-07-13 15:44:58.089375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.504 [2024-07-13 15:44:58.089657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.504 [2024-07-13 15:44:58.089934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.504 [2024-07-13 15:44:58.089961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.504 [2024-07-13 15:44:58.089986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.504 [2024-07-13 15:44:58.093604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.504 [2024-07-13 15:44:58.102690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.504 [2024-07-13 15:44:58.103139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.504 [2024-07-13 15:44:58.103174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.504 [2024-07-13 15:44:58.103204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.504 [2024-07-13 15:44:58.103486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.504 [2024-07-13 15:44:58.103751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.504 [2024-07-13 15:44:58.103777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.504 [2024-07-13 15:44:58.103802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.504 [2024-07-13 15:44:58.107429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.504 [2024-07-13 15:44:58.116727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.504 [2024-07-13 15:44:58.117218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.504 [2024-07-13 15:44:58.117252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.504 [2024-07-13 15:44:58.117282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.504 [2024-07-13 15:44:58.117564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.504 [2024-07-13 15:44:58.117829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.504 [2024-07-13 15:44:58.117855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.504 [2024-07-13 15:44:58.117892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.504 [2024-07-13 15:44:58.121505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.504 [2024-07-13 15:44:58.130593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.504 [2024-07-13 15:44:58.131068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.504 [2024-07-13 15:44:58.131103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.504 [2024-07-13 15:44:58.131133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.504 [2024-07-13 15:44:58.131416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.504 [2024-07-13 15:44:58.131686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.504 [2024-07-13 15:44:58.131713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.504 [2024-07-13 15:44:58.131738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.504 [2024-07-13 15:44:58.135364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.504 [2024-07-13 15:44:58.144486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.504 [2024-07-13 15:44:58.144929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.504 [2024-07-13 15:44:58.144964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.504 [2024-07-13 15:44:58.144994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.504 [2024-07-13 15:44:58.145275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.504 [2024-07-13 15:44:58.145540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.504 [2024-07-13 15:44:58.145566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.504 [2024-07-13 15:44:58.145591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.504 [2024-07-13 15:44:58.149213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.504 [2024-07-13 15:44:58.158518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.504 [2024-07-13 15:44:58.158993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.504 [2024-07-13 15:44:58.159028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.504 [2024-07-13 15:44:58.159058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.504 [2024-07-13 15:44:58.159343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.504 [2024-07-13 15:44:58.159607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.504 [2024-07-13 15:44:58.159634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.504 [2024-07-13 15:44:58.159658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.504 [2024-07-13 15:44:58.163282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.504 [2024-07-13 15:44:58.172370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.504 [2024-07-13 15:44:58.172816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.504 [2024-07-13 15:44:58.172851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.504 [2024-07-13 15:44:58.172892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.504 [2024-07-13 15:44:58.173179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.504 [2024-07-13 15:44:58.173444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.504 [2024-07-13 15:44:58.173470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.504 [2024-07-13 15:44:58.173494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.504 [2024-07-13 15:44:58.177121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.504 [2024-07-13 15:44:58.186211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.504 [2024-07-13 15:44:58.186697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.504 [2024-07-13 15:44:58.186732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.504 [2024-07-13 15:44:58.186762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.504 [2024-07-13 15:44:58.187054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.504 [2024-07-13 15:44:58.187319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.504 [2024-07-13 15:44:58.187345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.504 [2024-07-13 15:44:58.187370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.504 [2024-07-13 15:44:58.190996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.504 [2024-07-13 15:44:58.200093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.504 [2024-07-13 15:44:58.200532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.504 [2024-07-13 15:44:58.200566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.504 [2024-07-13 15:44:58.200597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.504 [2024-07-13 15:44:58.200888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.504 [2024-07-13 15:44:58.201153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.504 [2024-07-13 15:44:58.201179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.504 [2024-07-13 15:44:58.201205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.504 [2024-07-13 15:44:58.204822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.504 [2024-07-13 15:44:58.214039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.504 [2024-07-13 15:44:58.214492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.504 [2024-07-13 15:44:58.214529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.504 [2024-07-13 15:44:58.214560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.504 [2024-07-13 15:44:58.214842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.504 [2024-07-13 15:44:58.215121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.504 [2024-07-13 15:44:58.215148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.504 [2024-07-13 15:44:58.215173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.504 [2024-07-13 15:44:58.218788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.504 [2024-07-13 15:44:58.227890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.504 [2024-07-13 15:44:58.228358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.504 [2024-07-13 15:44:58.228393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.504 [2024-07-13 15:44:58.228432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.505 [2024-07-13 15:44:58.228711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.505 [2024-07-13 15:44:58.228985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.505 [2024-07-13 15:44:58.229012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.505 [2024-07-13 15:44:58.229038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.505 [2024-07-13 15:44:58.232876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.505 [2024-07-13 15:44:58.241761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.505 [2024-07-13 15:44:58.242244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.505 [2024-07-13 15:44:58.242279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.505 [2024-07-13 15:44:58.242308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.505 [2024-07-13 15:44:58.242591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.505 [2024-07-13 15:44:58.242856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.505 [2024-07-13 15:44:58.242893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.505 [2024-07-13 15:44:58.242919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.505 [2024-07-13 15:44:58.246542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.505 [2024-07-13 15:44:58.255728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.505 [2024-07-13 15:44:58.256216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.505 [2024-07-13 15:44:58.256252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.505 [2024-07-13 15:44:58.256281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.505 [2024-07-13 15:44:58.256564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.505 [2024-07-13 15:44:58.256829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.505 [2024-07-13 15:44:58.256856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.505 [2024-07-13 15:44:58.256891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.505 [2024-07-13 15:44:58.260507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.763 [2024-07-13 15:44:58.269842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.764 [2024-07-13 15:44:58.270327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.764 [2024-07-13 15:44:58.270362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.764 [2024-07-13 15:44:58.270392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.764 [2024-07-13 15:44:58.270675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.764 [2024-07-13 15:44:58.270952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.764 [2024-07-13 15:44:58.270985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.764 [2024-07-13 15:44:58.271011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.764 [2024-07-13 15:44:58.274733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.764 [2024-07-13 15:44:58.283831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.764 [2024-07-13 15:44:58.284316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.764 [2024-07-13 15:44:58.284351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.764 [2024-07-13 15:44:58.284381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.764 [2024-07-13 15:44:58.284665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.764 [2024-07-13 15:44:58.284942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.764 [2024-07-13 15:44:58.284969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.764 [2024-07-13 15:44:58.284994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.764 [2024-07-13 15:44:58.288611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.764 [2024-07-13 15:44:58.297703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.764 [2024-07-13 15:44:58.298189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.764 [2024-07-13 15:44:58.298224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.764 [2024-07-13 15:44:58.298254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.764 [2024-07-13 15:44:58.298538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.764 [2024-07-13 15:44:58.298802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.764 [2024-07-13 15:44:58.298829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.764 [2024-07-13 15:44:58.298854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.764 [2024-07-13 15:44:58.302483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.764 [2024-07-13 15:44:58.311576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.764 [2024-07-13 15:44:58.312057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.764 [2024-07-13 15:44:58.312092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.764 [2024-07-13 15:44:58.312122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.764 [2024-07-13 15:44:58.312407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.764 [2024-07-13 15:44:58.312671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.764 [2024-07-13 15:44:58.312698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.764 [2024-07-13 15:44:58.312722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.764 [2024-07-13 15:44:58.316356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.764 [2024-07-13 15:44:58.325476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.764 [2024-07-13 15:44:58.325958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.764 [2024-07-13 15:44:58.325994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.764 [2024-07-13 15:44:58.326024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.764 [2024-07-13 15:44:58.326308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.764 [2024-07-13 15:44:58.326573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.764 [2024-07-13 15:44:58.326600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.764 [2024-07-13 15:44:58.326624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.764 [2024-07-13 15:44:58.330251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.764 [2024-07-13 15:44:58.339348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.764 [2024-07-13 15:44:58.339819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.764 [2024-07-13 15:44:58.339853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.764 [2024-07-13 15:44:58.339892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.764 [2024-07-13 15:44:58.340180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.764 [2024-07-13 15:44:58.340444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.764 [2024-07-13 15:44:58.340470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.764 [2024-07-13 15:44:58.340495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.764 [2024-07-13 15:44:58.344120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.764 [2024-07-13 15:44:58.353218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.764 [2024-07-13 15:44:58.353683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.764 [2024-07-13 15:44:58.353717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.764 [2024-07-13 15:44:58.353748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.764 [2024-07-13 15:44:58.354044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.764 [2024-07-13 15:44:58.354309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.764 [2024-07-13 15:44:58.354336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.764 [2024-07-13 15:44:58.354360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.764 [2024-07-13 15:44:58.357989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.764 [2024-07-13 15:44:58.367094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.764 [2024-07-13 15:44:58.367564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.764 [2024-07-13 15:44:58.367598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.764 [2024-07-13 15:44:58.367628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.764 [2024-07-13 15:44:58.367926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.764 [2024-07-13 15:44:58.368190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.764 [2024-07-13 15:44:58.368217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.764 [2024-07-13 15:44:58.368241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.764 [2024-07-13 15:44:58.371859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.764 [2024-07-13 15:44:58.380962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.764 [2024-07-13 15:44:58.381443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.764 [2024-07-13 15:44:58.381477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.764 [2024-07-13 15:44:58.381506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.764 [2024-07-13 15:44:58.381787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.764 [2024-07-13 15:44:58.382067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.764 [2024-07-13 15:44:58.382094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.764 [2024-07-13 15:44:58.382119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.764 [2024-07-13 15:44:58.385734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.764 [2024-07-13 15:44:58.394837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.764 [2024-07-13 15:44:58.395323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.764 [2024-07-13 15:44:58.395358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.764 [2024-07-13 15:44:58.395387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.764 [2024-07-13 15:44:58.395667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.764 [2024-07-13 15:44:58.395943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.764 [2024-07-13 15:44:58.395970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.764 [2024-07-13 15:44:58.395994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.764 [2024-07-13 15:44:58.399615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.764 [2024-07-13 15:44:58.408704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.764 [2024-07-13 15:44:58.409186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.764 [2024-07-13 15:44:58.409220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.764 [2024-07-13 15:44:58.409251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.764 [2024-07-13 15:44:58.409531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.764 [2024-07-13 15:44:58.409795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.764 [2024-07-13 15:44:58.409822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.765 [2024-07-13 15:44:58.409855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.765 [2024-07-13 15:44:58.413484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.765 [2024-07-13 15:44:58.422571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.765 [2024-07-13 15:44:58.423035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.765 [2024-07-13 15:44:58.423070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.765 [2024-07-13 15:44:58.423100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.765 [2024-07-13 15:44:58.423380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.765 [2024-07-13 15:44:58.423648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.765 [2024-07-13 15:44:58.423675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.765 [2024-07-13 15:44:58.423700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.765 [2024-07-13 15:44:58.427331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.765 [2024-07-13 15:44:58.436423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.765 [2024-07-13 15:44:58.436886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.765 [2024-07-13 15:44:58.436921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.765 [2024-07-13 15:44:58.436950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.765 [2024-07-13 15:44:58.437232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.765 [2024-07-13 15:44:58.437497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.765 [2024-07-13 15:44:58.437524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.765 [2024-07-13 15:44:58.437548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.765 [2024-07-13 15:44:58.441174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.765 [2024-07-13 15:44:58.450268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.765 [2024-07-13 15:44:58.450710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.765 [2024-07-13 15:44:58.450744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.765 [2024-07-13 15:44:58.450773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.765 [2024-07-13 15:44:58.451069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.765 [2024-07-13 15:44:58.451334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.765 [2024-07-13 15:44:58.451360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.765 [2024-07-13 15:44:58.451385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.765 [2024-07-13 15:44:58.455008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.765 [2024-07-13 15:44:58.464151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.765 [2024-07-13 15:44:58.464634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.765 [2024-07-13 15:44:58.464669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.765 [2024-07-13 15:44:58.464699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.765 [2024-07-13 15:44:58.464992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.765 [2024-07-13 15:44:58.465258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.765 [2024-07-13 15:44:58.465285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.765 [2024-07-13 15:44:58.465309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.765 [2024-07-13 15:44:58.468938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.765 [2024-07-13 15:44:58.478035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.765 [2024-07-13 15:44:58.478514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.765 [2024-07-13 15:44:58.478549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.765 [2024-07-13 15:44:58.478579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.765 [2024-07-13 15:44:58.478862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.765 [2024-07-13 15:44:58.479135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.765 [2024-07-13 15:44:58.479162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.765 [2024-07-13 15:44:58.479186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.765 [2024-07-13 15:44:58.482799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.765 [2024-07-13 15:44:58.491920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.765 [2024-07-13 15:44:58.492410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.765 [2024-07-13 15:44:58.492445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.765 [2024-07-13 15:44:58.492475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.765 [2024-07-13 15:44:58.492756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.765 [2024-07-13 15:44:58.493033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.765 [2024-07-13 15:44:58.493060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.765 [2024-07-13 15:44:58.493085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.765 [2024-07-13 15:44:58.496707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:27.765 [2024-07-13 15:44:58.505815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.765 [2024-07-13 15:44:58.506265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.765 [2024-07-13 15:44:58.506301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.765 [2024-07-13 15:44:58.506331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.765 [2024-07-13 15:44:58.506611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.765 [2024-07-13 15:44:58.506893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.765 [2024-07-13 15:44:58.506919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.765 [2024-07-13 15:44:58.506945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.765 [2024-07-13 15:44:58.510572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:27.765 [2024-07-13 15:44:58.519674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:27.765 [2024-07-13 15:44:58.520136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:27.765 [2024-07-13 15:44:58.520171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:27.765 [2024-07-13 15:44:58.520201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:27.765 [2024-07-13 15:44:58.520482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:27.765 [2024-07-13 15:44:58.520747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:27.765 [2024-07-13 15:44:58.520773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:27.765 [2024-07-13 15:44:58.520797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:27.765 [2024-07-13 15:44:58.524449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.024 [2024-07-13 15:44:58.533626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.024 [2024-07-13 15:44:58.534120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.024 [2024-07-13 15:44:58.534156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.024 [2024-07-13 15:44:58.534186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.024 [2024-07-13 15:44:58.534469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.024 [2024-07-13 15:44:58.534735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.024 [2024-07-13 15:44:58.534762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.024 [2024-07-13 15:44:58.534786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.024 [2024-07-13 15:44:58.538413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.024 [2024-07-13 15:44:58.547511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.024 [2024-07-13 15:44:58.547992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.024 [2024-07-13 15:44:58.548027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.024 [2024-07-13 15:44:58.548057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.024 [2024-07-13 15:44:58.548338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.024 [2024-07-13 15:44:58.548601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.024 [2024-07-13 15:44:58.548628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.024 [2024-07-13 15:44:58.548652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.024 [2024-07-13 15:44:58.552286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.024 [2024-07-13 15:44:58.561374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.024 [2024-07-13 15:44:58.561860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.024 [2024-07-13 15:44:58.561901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.024 [2024-07-13 15:44:58.561931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.024 [2024-07-13 15:44:58.562212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.024 [2024-07-13 15:44:58.562479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.024 [2024-07-13 15:44:58.562506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.024 [2024-07-13 15:44:58.562531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.024 [2024-07-13 15:44:58.566159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.024 [2024-07-13 15:44:58.575250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.024 [2024-07-13 15:44:58.575728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.024 [2024-07-13 15:44:58.575763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.024 [2024-07-13 15:44:58.575792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.024 [2024-07-13 15:44:58.576085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.024 [2024-07-13 15:44:58.576350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.024 [2024-07-13 15:44:58.576377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.024 [2024-07-13 15:44:58.576401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.024 [2024-07-13 15:44:58.580027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.024 [2024-07-13 15:44:58.589118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.024 [2024-07-13 15:44:58.589584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.024 [2024-07-13 15:44:58.589619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.024 [2024-07-13 15:44:58.589649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.024 [2024-07-13 15:44:58.589941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.024 [2024-07-13 15:44:58.590206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.024 [2024-07-13 15:44:58.590232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.024 [2024-07-13 15:44:58.590257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.024 [2024-07-13 15:44:58.593877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.024 [2024-07-13 15:44:58.602963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.024 [2024-07-13 15:44:58.603431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.024 [2024-07-13 15:44:58.603465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.024 [2024-07-13 15:44:58.603502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.024 [2024-07-13 15:44:58.603782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.024 [2024-07-13 15:44:58.604066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.024 [2024-07-13 15:44:58.604093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.024 [2024-07-13 15:44:58.604118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.024 [2024-07-13 15:44:58.607735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.024 [2024-07-13 15:44:58.616819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.024 [2024-07-13 15:44:58.617288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.024 [2024-07-13 15:44:58.617323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.024 [2024-07-13 15:44:58.617352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.025 [2024-07-13 15:44:58.617633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.025 [2024-07-13 15:44:58.617908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.025 [2024-07-13 15:44:58.617935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.025 [2024-07-13 15:44:58.617960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.025 [2024-07-13 15:44:58.621578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.025 [2024-07-13 15:44:58.630667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.025 [2024-07-13 15:44:58.631142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.025 [2024-07-13 15:44:58.631176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.025 [2024-07-13 15:44:58.631205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.025 [2024-07-13 15:44:58.631486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.025 [2024-07-13 15:44:58.631750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.025 [2024-07-13 15:44:58.631776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.025 [2024-07-13 15:44:58.631801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.025 [2024-07-13 15:44:58.635429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.025 [2024-07-13 15:44:58.644529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.025 [2024-07-13 15:44:58.644993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.025 [2024-07-13 15:44:58.645028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.025 [2024-07-13 15:44:58.645058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.025 [2024-07-13 15:44:58.645341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.025 [2024-07-13 15:44:58.645611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.025 [2024-07-13 15:44:58.645638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.025 [2024-07-13 15:44:58.645663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.025 [2024-07-13 15:44:58.649288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.025 [2024-07-13 15:44:58.658469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.025 [2024-07-13 15:44:58.658941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.025 [2024-07-13 15:44:58.658977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.025 [2024-07-13 15:44:58.659007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.025 [2024-07-13 15:44:58.659287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.025 [2024-07-13 15:44:58.659550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.025 [2024-07-13 15:44:58.659577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.025 [2024-07-13 15:44:58.659601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.025 [2024-07-13 15:44:58.663232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.025 [2024-07-13 15:44:58.672365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.025 [2024-07-13 15:44:58.672949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.025 [2024-07-13 15:44:58.672984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.025 [2024-07-13 15:44:58.673013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.025 [2024-07-13 15:44:58.673295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.025 [2024-07-13 15:44:58.673559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.025 [2024-07-13 15:44:58.673586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.025 [2024-07-13 15:44:58.673610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.025 [2024-07-13 15:44:58.677236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.025 [2024-07-13 15:44:58.686320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.025 [2024-07-13 15:44:58.686799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.025 [2024-07-13 15:44:58.686833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.025 [2024-07-13 15:44:58.686862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.025 [2024-07-13 15:44:58.687158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.025 [2024-07-13 15:44:58.687424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.025 [2024-07-13 15:44:58.687451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.025 [2024-07-13 15:44:58.687476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.025 [2024-07-13 15:44:58.691101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.025 [2024-07-13 15:44:58.700200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.025 [2024-07-13 15:44:58.700650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.025 [2024-07-13 15:44:58.700685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.025 [2024-07-13 15:44:58.700714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.025 [2024-07-13 15:44:58.701008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.025 [2024-07-13 15:44:58.701273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.025 [2024-07-13 15:44:58.701300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.025 [2024-07-13 15:44:58.701325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.025 [2024-07-13 15:44:58.704964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.025 [2024-07-13 15:44:58.714071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.025 [2024-07-13 15:44:58.714549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.025 [2024-07-13 15:44:58.714584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.025 [2024-07-13 15:44:58.714614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.025 [2024-07-13 15:44:58.714915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.025 [2024-07-13 15:44:58.715182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.025 [2024-07-13 15:44:58.715208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.025 [2024-07-13 15:44:58.715233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.025 [2024-07-13 15:44:58.718851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.025 [2024-07-13 15:44:58.727961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.025 [2024-07-13 15:44:58.728401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.025 [2024-07-13 15:44:58.728436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.025 [2024-07-13 15:44:58.728465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.025 [2024-07-13 15:44:58.728748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.025 [2024-07-13 15:44:58.729024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.025 [2024-07-13 15:44:58.729050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.025 [2024-07-13 15:44:58.729076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.025 [2024-07-13 15:44:58.732696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.025 [2024-07-13 15:44:58.741798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.025 [2024-07-13 15:44:58.742247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.025 [2024-07-13 15:44:58.742282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.025 [2024-07-13 15:44:58.742318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.025 [2024-07-13 15:44:58.742598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.025 [2024-07-13 15:44:58.742863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.025 [2024-07-13 15:44:58.742901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.025 [2024-07-13 15:44:58.742925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.025 [2024-07-13 15:44:58.746545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.025 [2024-07-13 15:44:58.755634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.025 [2024-07-13 15:44:58.756089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.025 [2024-07-13 15:44:58.756124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.025 [2024-07-13 15:44:58.756154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.025 [2024-07-13 15:44:58.756435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.025 [2024-07-13 15:44:58.756699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.025 [2024-07-13 15:44:58.756725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.025 [2024-07-13 15:44:58.756750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.025 [2024-07-13 15:44:58.760378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.025 [2024-07-13 15:44:58.769477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.025 [2024-07-13 15:44:58.769951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.026 [2024-07-13 15:44:58.769986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.026 [2024-07-13 15:44:58.770016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.026 [2024-07-13 15:44:58.770300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.026 [2024-07-13 15:44:58.770563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.026 [2024-07-13 15:44:58.770590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.026 [2024-07-13 15:44:58.770614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.026 [2024-07-13 15:44:58.774245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.026 [2024-07-13 15:44:58.783350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.026 [2024-07-13 15:44:58.783831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.026 [2024-07-13 15:44:58.783876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.026 [2024-07-13 15:44:58.783908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.026 [2024-07-13 15:44:58.784192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.026 [2024-07-13 15:44:58.784456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.026 [2024-07-13 15:44:58.784489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.026 [2024-07-13 15:44:58.784514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.026 [2024-07-13 15:44:58.788271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.285 [2024-07-13 15:44:58.797337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.285 [2024-07-13 15:44:58.797805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.285 [2024-07-13 15:44:58.797841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.285 [2024-07-13 15:44:58.797882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.285 [2024-07-13 15:44:58.798167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.285 [2024-07-13 15:44:58.798433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.285 [2024-07-13 15:44:58.798460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.285 [2024-07-13 15:44:58.798483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.285 [2024-07-13 15:44:58.802116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.285 [2024-07-13 15:44:58.811207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.285 [2024-07-13 15:44:58.811701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.285 [2024-07-13 15:44:58.811736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.285 [2024-07-13 15:44:58.811766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.285 [2024-07-13 15:44:58.812059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.285 [2024-07-13 15:44:58.812324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.285 [2024-07-13 15:44:58.812351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.285 [2024-07-13 15:44:58.812376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.285 [2024-07-13 15:44:58.816005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.285 [2024-07-13 15:44:58.825099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.285 [2024-07-13 15:44:58.825574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.285 [2024-07-13 15:44:58.825608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.285 [2024-07-13 15:44:58.825637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.285 [2024-07-13 15:44:58.825928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.285 [2024-07-13 15:44:58.826192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.285 [2024-07-13 15:44:58.826219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.285 [2024-07-13 15:44:58.826243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.285 [2024-07-13 15:44:58.829863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.285 [2024-07-13 15:44:58.838965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.285 [2024-07-13 15:44:58.839440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.285 [2024-07-13 15:44:58.839475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.285 [2024-07-13 15:44:58.839504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.285 [2024-07-13 15:44:58.839786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.285 [2024-07-13 15:44:58.840064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.285 [2024-07-13 15:44:58.840090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.285 [2024-07-13 15:44:58.840115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.285 [2024-07-13 15:44:58.843733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.285 [2024-07-13 15:44:58.852833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.285 [2024-07-13 15:44:58.853284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.285 [2024-07-13 15:44:58.853319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.285 [2024-07-13 15:44:58.853349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.285 [2024-07-13 15:44:58.853630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.285 [2024-07-13 15:44:58.853906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.285 [2024-07-13 15:44:58.853933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.285 [2024-07-13 15:44:58.853957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.285 [2024-07-13 15:44:58.857575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.285 [2024-07-13 15:44:58.866888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.285 [2024-07-13 15:44:58.867328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.285 [2024-07-13 15:44:58.867363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.285 [2024-07-13 15:44:58.867393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.285 [2024-07-13 15:44:58.867673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.285 [2024-07-13 15:44:58.867951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.285 [2024-07-13 15:44:58.867978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.285 [2024-07-13 15:44:58.868003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.285 [2024-07-13 15:44:58.871622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.285 [2024-07-13 15:44:58.880758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.285 [2024-07-13 15:44:58.881211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.285 [2024-07-13 15:44:58.881245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.285 [2024-07-13 15:44:58.881275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.285 [2024-07-13 15:44:58.881562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.285 [2024-07-13 15:44:58.881828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.285 [2024-07-13 15:44:58.881855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.285 [2024-07-13 15:44:58.881893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.285 [2024-07-13 15:44:58.885511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.285 [2024-07-13 15:44:58.894604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.285 [2024-07-13 15:44:58.895076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.285 [2024-07-13 15:44:58.895111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.285 [2024-07-13 15:44:58.895141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.285 [2024-07-13 15:44:58.895422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.285 [2024-07-13 15:44:58.895686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.285 [2024-07-13 15:44:58.895712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.285 [2024-07-13 15:44:58.895736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.285 [2024-07-13 15:44:58.899367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.285 [2024-07-13 15:44:58.908460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.285 [2024-07-13 15:44:58.908949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.285 [2024-07-13 15:44:58.908984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.285 [2024-07-13 15:44:58.909014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.285 [2024-07-13 15:44:58.909296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.285 [2024-07-13 15:44:58.909561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.285 [2024-07-13 15:44:58.909587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.285 [2024-07-13 15:44:58.909612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.285 [2024-07-13 15:44:58.913238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.285 [2024-07-13 15:44:58.922331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.285 [2024-07-13 15:44:58.922930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.285 [2024-07-13 15:44:58.922966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.285 [2024-07-13 15:44:58.922995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.285 [2024-07-13 15:44:58.923281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.285 [2024-07-13 15:44:58.923546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.285 [2024-07-13 15:44:58.923573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.285 [2024-07-13 15:44:58.923605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.285 [2024-07-13 15:44:58.927236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.285 [2024-07-13 15:44:58.936326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.285 [2024-07-13 15:44:58.936769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.285 [2024-07-13 15:44:58.936803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.285 [2024-07-13 15:44:58.936832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.285 [2024-07-13 15:44:58.937125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.285 [2024-07-13 15:44:58.937390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.286 [2024-07-13 15:44:58.937416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.286 [2024-07-13 15:44:58.937441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.286 [2024-07-13 15:44:58.941066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.286 [2024-07-13 15:44:58.950170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.286 [2024-07-13 15:44:58.950650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.286 [2024-07-13 15:44:58.950684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.286 [2024-07-13 15:44:58.950714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.286 [2024-07-13 15:44:58.951008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.286 [2024-07-13 15:44:58.951273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.286 [2024-07-13 15:44:58.951299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.286 [2024-07-13 15:44:58.951323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.286 [2024-07-13 15:44:58.954956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.286 [2024-07-13 15:44:58.964069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.286 [2024-07-13 15:44:58.964519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.286 [2024-07-13 15:44:58.964554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.286 [2024-07-13 15:44:58.964584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.286 [2024-07-13 15:44:58.964885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.286 [2024-07-13 15:44:58.965150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.286 [2024-07-13 15:44:58.965177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.286 [2024-07-13 15:44:58.965202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.286 [2024-07-13 15:44:58.968827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.286 [2024-07-13 15:44:58.977946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.286 [2024-07-13 15:44:58.978571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.286 [2024-07-13 15:44:58.978636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.286 [2024-07-13 15:44:58.978666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.286 [2024-07-13 15:44:58.978956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.286 [2024-07-13 15:44:58.979221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.286 [2024-07-13 15:44:58.979248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.286 [2024-07-13 15:44:58.979272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.286 [2024-07-13 15:44:58.982903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.286 [2024-07-13 15:44:58.991814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.286 [2024-07-13 15:44:58.992269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.286 [2024-07-13 15:44:58.992303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.286 [2024-07-13 15:44:58.992332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.286 [2024-07-13 15:44:58.992613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.286 [2024-07-13 15:44:58.992892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.286 [2024-07-13 15:44:58.992919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.286 [2024-07-13 15:44:58.992944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.286 [2024-07-13 15:44:58.996566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.286 [2024-07-13 15:44:59.005677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.286 [2024-07-13 15:44:59.006156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.286 [2024-07-13 15:44:59.006191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.286 [2024-07-13 15:44:59.006220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.286 [2024-07-13 15:44:59.006504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.286 [2024-07-13 15:44:59.006771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.286 [2024-07-13 15:44:59.006797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.286 [2024-07-13 15:44:59.006822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.286 [2024-07-13 15:44:59.010459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.286 [2024-07-13 15:44:59.019570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.286 [2024-07-13 15:44:59.020021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.286 [2024-07-13 15:44:59.020056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.286 [2024-07-13 15:44:59.020085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.286 [2024-07-13 15:44:59.020366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.286 [2024-07-13 15:44:59.020637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.286 [2024-07-13 15:44:59.020664] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.286 [2024-07-13 15:44:59.020688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.286 [2024-07-13 15:44:59.024321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.286 [2024-07-13 15:44:59.033433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.286 [2024-07-13 15:44:59.033887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.286 [2024-07-13 15:44:59.033923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.286 [2024-07-13 15:44:59.033953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.286 [2024-07-13 15:44:59.034236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.286 [2024-07-13 15:44:59.034500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.286 [2024-07-13 15:44:59.034527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.286 [2024-07-13 15:44:59.034552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.286 [2024-07-13 15:44:59.038188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.286 [2024-07-13 15:44:59.047400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.286 [2024-07-13 15:44:59.047863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.286 [2024-07-13 15:44:59.047906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.286 [2024-07-13 15:44:59.047936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.286 [2024-07-13 15:44:59.048231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.286 [2024-07-13 15:44:59.048495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.286 [2024-07-13 15:44:59.048521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.286 [2024-07-13 15:44:59.048546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.546 [2024-07-13 15:44:59.052256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.546 [2024-07-13 15:44:59.061467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.546 [2024-07-13 15:44:59.061937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.546 [2024-07-13 15:44:59.061973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.546 [2024-07-13 15:44:59.062004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.546 [2024-07-13 15:44:59.062285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.546 [2024-07-13 15:44:59.062549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.546 [2024-07-13 15:44:59.062576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.546 [2024-07-13 15:44:59.062601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.546 [2024-07-13 15:44:59.066241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.546 [2024-07-13 15:44:59.075354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.546 [2024-07-13 15:44:59.075796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.546 [2024-07-13 15:44:59.075830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.546 [2024-07-13 15:44:59.075860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.546 [2024-07-13 15:44:59.076156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.546 [2024-07-13 15:44:59.076421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.546 [2024-07-13 15:44:59.076447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.546 [2024-07-13 15:44:59.076472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.546 [2024-07-13 15:44:59.080119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.546 [2024-07-13 15:44:59.089277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.546 [2024-07-13 15:44:59.089906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.546 [2024-07-13 15:44:59.089969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.546 [2024-07-13 15:44:59.089999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.546 [2024-07-13 15:44:59.090283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.546 [2024-07-13 15:44:59.090548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.546 [2024-07-13 15:44:59.090575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.546 [2024-07-13 15:44:59.090599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.546 [2024-07-13 15:44:59.094239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.546 [2024-07-13 15:44:59.103137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.546 [2024-07-13 15:44:59.103605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.546 [2024-07-13 15:44:59.103640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.546 [2024-07-13 15:44:59.103670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.546 [2024-07-13 15:44:59.103963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.546 [2024-07-13 15:44:59.104228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.546 [2024-07-13 15:44:59.104254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.546 [2024-07-13 15:44:59.104279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.546 [2024-07-13 15:44:59.107907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.546 [2024-07-13 15:44:59.117012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.546 [2024-07-13 15:44:59.117498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.546 [2024-07-13 15:44:59.117533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.546 [2024-07-13 15:44:59.117570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.546 [2024-07-13 15:44:59.117849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.546 [2024-07-13 15:44:59.118124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.546 [2024-07-13 15:44:59.118151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.546 [2024-07-13 15:44:59.118175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.546 [2024-07-13 15:44:59.121796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.546 [2024-07-13 15:44:59.130924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.546 [2024-07-13 15:44:59.131498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.546 [2024-07-13 15:44:59.131552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.546 [2024-07-13 15:44:59.131581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.546 [2024-07-13 15:44:59.131862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.546 [2024-07-13 15:44:59.132143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.546 [2024-07-13 15:44:59.132170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.546 [2024-07-13 15:44:59.132195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.546 [2024-07-13 15:44:59.135815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.546 [2024-07-13 15:44:59.144940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.546 [2024-07-13 15:44:59.145409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.546 [2024-07-13 15:44:59.145443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.546 [2024-07-13 15:44:59.145472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.546 [2024-07-13 15:44:59.145754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.546 [2024-07-13 15:44:59.146030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.546 [2024-07-13 15:44:59.146056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.546 [2024-07-13 15:44:59.146081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.546 [2024-07-13 15:44:59.149697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.546 [2024-07-13 15:44:59.158798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.546 [2024-07-13 15:44:59.159275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.546 [2024-07-13 15:44:59.159310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.546 [2024-07-13 15:44:59.159340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.546 [2024-07-13 15:44:59.159623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.546 [2024-07-13 15:44:59.159899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.546 [2024-07-13 15:44:59.159934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.546 [2024-07-13 15:44:59.159960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.546 [2024-07-13 15:44:59.163577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.546 [2024-07-13 15:44:59.172667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.546 [2024-07-13 15:44:59.173121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.546 [2024-07-13 15:44:59.173156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.546 [2024-07-13 15:44:59.173185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.546 [2024-07-13 15:44:59.173465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.546 [2024-07-13 15:44:59.173730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.546 [2024-07-13 15:44:59.173757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.546 [2024-07-13 15:44:59.173782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.546 [2024-07-13 15:44:59.177417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.546 [2024-07-13 15:44:59.186510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.546 [2024-07-13 15:44:59.186979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.546 [2024-07-13 15:44:59.187014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.546 [2024-07-13 15:44:59.187043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.546 [2024-07-13 15:44:59.187325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.546 [2024-07-13 15:44:59.187591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.546 [2024-07-13 15:44:59.187618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.546 [2024-07-13 15:44:59.187643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.546 [2024-07-13 15:44:59.191270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.546 [2024-07-13 15:44:59.200375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.546 [2024-07-13 15:44:59.200815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.547 [2024-07-13 15:44:59.200850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.547 [2024-07-13 15:44:59.200891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.547 [2024-07-13 15:44:59.201172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.547 [2024-07-13 15:44:59.201437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.547 [2024-07-13 15:44:59.201464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.547 [2024-07-13 15:44:59.201487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.547 [2024-07-13 15:44:59.205115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.547 [2024-07-13 15:44:59.214312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.547 [2024-07-13 15:44:59.214789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.547 [2024-07-13 15:44:59.214824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.547 [2024-07-13 15:44:59.214854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.547 [2024-07-13 15:44:59.215156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.547 [2024-07-13 15:44:59.215421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.547 [2024-07-13 15:44:59.215448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.547 [2024-07-13 15:44:59.215472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.547 [2024-07-13 15:44:59.219099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.547 [2024-07-13 15:44:59.228209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.547 [2024-07-13 15:44:59.228676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.547 [2024-07-13 15:44:59.228711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.547 [2024-07-13 15:44:59.228741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.547 [2024-07-13 15:44:59.229036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.547 [2024-07-13 15:44:59.229302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.547 [2024-07-13 15:44:59.229329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.547 [2024-07-13 15:44:59.229353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.547 [2024-07-13 15:44:59.232980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.547 [2024-07-13 15:44:59.242077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.547 [2024-07-13 15:44:59.242544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.547 [2024-07-13 15:44:59.242578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.547 [2024-07-13 15:44:59.242607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.547 [2024-07-13 15:44:59.242900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.547 [2024-07-13 15:44:59.243164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.547 [2024-07-13 15:44:59.243191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.547 [2024-07-13 15:44:59.243216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.547 [2024-07-13 15:44:59.246837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.547 [2024-07-13 15:44:59.255947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.547 [2024-07-13 15:44:59.256417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.547 [2024-07-13 15:44:59.256452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.547 [2024-07-13 15:44:59.256490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.547 [2024-07-13 15:44:59.256769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.547 [2024-07-13 15:44:59.257046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.547 [2024-07-13 15:44:59.257073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.547 [2024-07-13 15:44:59.257098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.547 [2024-07-13 15:44:59.260715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.547 [2024-07-13 15:44:59.269814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.547 [2024-07-13 15:44:59.270293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.547 [2024-07-13 15:44:59.270327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.547 [2024-07-13 15:44:59.270357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.547 [2024-07-13 15:44:59.270641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.547 [2024-07-13 15:44:59.270920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.547 [2024-07-13 15:44:59.270947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.547 [2024-07-13 15:44:59.270972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.547 [2024-07-13 15:44:59.274590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.547 [2024-07-13 15:44:59.283686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.547 [2024-07-13 15:44:59.284157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.547 [2024-07-13 15:44:59.284192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.547 [2024-07-13 15:44:59.284222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.547 [2024-07-13 15:44:59.284502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.547 [2024-07-13 15:44:59.284766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.547 [2024-07-13 15:44:59.284792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.547 [2024-07-13 15:44:59.284817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.547 [2024-07-13 15:44:59.288447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.547 [2024-07-13 15:44:59.297611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.547 [2024-07-13 15:44:59.298089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.547 [2024-07-13 15:44:59.298124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.547 [2024-07-13 15:44:59.298154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.547 [2024-07-13 15:44:59.298434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.547 [2024-07-13 15:44:59.298697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.547 [2024-07-13 15:44:59.298725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.547 [2024-07-13 15:44:59.298756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.547 [2024-07-13 15:44:59.302389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.821 [2024-07-13 15:44:59.311724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.821 [2024-07-13 15:44:59.312217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.821 [2024-07-13 15:44:59.312253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.821 [2024-07-13 15:44:59.312283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.821 [2024-07-13 15:44:59.312587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.821 [2024-07-13 15:44:59.312888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.821 [2024-07-13 15:44:59.312916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.821 [2024-07-13 15:44:59.312941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.821 [2024-07-13 15:44:59.316804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.821 [2024-07-13 15:44:59.325904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.821 [2024-07-13 15:44:59.326382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.821 [2024-07-13 15:44:59.326418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.821 [2024-07-13 15:44:59.326448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.821 [2024-07-13 15:44:59.326739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.821 [2024-07-13 15:44:59.327034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.821 [2024-07-13 15:44:59.327062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.821 [2024-07-13 15:44:59.327087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.821 [2024-07-13 15:44:59.330774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.821 [2024-07-13 15:44:59.339901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.821 [2024-07-13 15:44:59.340454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.821 [2024-07-13 15:44:59.340508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.821 [2024-07-13 15:44:59.340539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.822 [2024-07-13 15:44:59.340821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.822 [2024-07-13 15:44:59.341104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.822 [2024-07-13 15:44:59.341131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.822 [2024-07-13 15:44:59.341156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.822 [2024-07-13 15:44:59.344783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.822 [2024-07-13 15:44:59.353907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.822 [2024-07-13 15:44:59.354529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.822 [2024-07-13 15:44:59.354594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.822 [2024-07-13 15:44:59.354624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.822 [2024-07-13 15:44:59.354918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.822 [2024-07-13 15:44:59.355184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.822 [2024-07-13 15:44:59.355211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.822 [2024-07-13 15:44:59.355237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.822 [2024-07-13 15:44:59.358859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.822 [2024-07-13 15:44:59.367758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.822 [2024-07-13 15:44:59.368245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.822 [2024-07-13 15:44:59.368281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.822 [2024-07-13 15:44:59.368311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.822 [2024-07-13 15:44:59.368592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.822 [2024-07-13 15:44:59.368857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.822 [2024-07-13 15:44:59.368894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.822 [2024-07-13 15:44:59.368920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.822 [2024-07-13 15:44:59.372543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.822 [2024-07-13 15:44:59.381654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.822 [2024-07-13 15:44:59.382105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.822 [2024-07-13 15:44:59.382140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.822 [2024-07-13 15:44:59.382170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.822 [2024-07-13 15:44:59.382453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.822 [2024-07-13 15:44:59.382718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.822 [2024-07-13 15:44:59.382745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.822 [2024-07-13 15:44:59.382769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.822 [2024-07-13 15:44:59.386399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.822 [2024-07-13 15:44:59.395501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.822 [2024-07-13 15:44:59.395941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.822 [2024-07-13 15:44:59.395983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.822 [2024-07-13 15:44:59.396013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.822 [2024-07-13 15:44:59.396306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.822 [2024-07-13 15:44:59.396570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.822 [2024-07-13 15:44:59.396597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.822 [2024-07-13 15:44:59.396621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.822 [2024-07-13 15:44:59.400258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.822 [2024-07-13 15:44:59.409378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.822 [2024-07-13 15:44:59.409825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.822 [2024-07-13 15:44:59.409861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.822 [2024-07-13 15:44:59.409902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.822 [2024-07-13 15:44:59.410183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.822 [2024-07-13 15:44:59.410448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.822 [2024-07-13 15:44:59.410474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.822 [2024-07-13 15:44:59.410499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.822 [2024-07-13 15:44:59.414137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.822 [2024-07-13 15:44:59.423278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.822 [2024-07-13 15:44:59.423811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.822 [2024-07-13 15:44:59.423847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.822 [2024-07-13 15:44:59.423888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.822 [2024-07-13 15:44:59.424181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.822 [2024-07-13 15:44:59.424446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.822 [2024-07-13 15:44:59.424473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.822 [2024-07-13 15:44:59.424498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.822 [2024-07-13 15:44:59.428128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.822 [2024-07-13 15:44:59.437246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.822 [2024-07-13 15:44:59.437721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.822 [2024-07-13 15:44:59.437756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.822 [2024-07-13 15:44:59.437785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.822 [2024-07-13 15:44:59.438081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.822 [2024-07-13 15:44:59.438347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.822 [2024-07-13 15:44:59.438374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.822 [2024-07-13 15:44:59.438406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.822 [2024-07-13 15:44:59.442028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.822 [2024-07-13 15:44:59.451125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.822 [2024-07-13 15:44:59.451601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.822 [2024-07-13 15:44:59.451636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.822 [2024-07-13 15:44:59.451667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.822 [2024-07-13 15:44:59.451958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.822 [2024-07-13 15:44:59.452223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.822 [2024-07-13 15:44:59.452249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.822 [2024-07-13 15:44:59.452273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.822 [2024-07-13 15:44:59.455898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.822 [2024-07-13 15:44:59.465002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.822 [2024-07-13 15:44:59.465485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.822 [2024-07-13 15:44:59.465520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.822 [2024-07-13 15:44:59.465549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.822 [2024-07-13 15:44:59.465831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.822 [2024-07-13 15:44:59.466108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.822 [2024-07-13 15:44:59.466135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.822 [2024-07-13 15:44:59.466160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.822 [2024-07-13 15:44:59.469781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.822 [2024-07-13 15:44:59.478914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.822 [2024-07-13 15:44:59.479382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.822 [2024-07-13 15:44:59.479417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.822 [2024-07-13 15:44:59.479447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.822 [2024-07-13 15:44:59.479730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.822 [2024-07-13 15:44:59.480005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.822 [2024-07-13 15:44:59.480037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.822 [2024-07-13 15:44:59.480061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.822 [2024-07-13 15:44:59.483678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.822 [2024-07-13 15:44:59.492809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.822 [2024-07-13 15:44:59.493259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.822 [2024-07-13 15:44:59.493299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.823 [2024-07-13 15:44:59.493329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.823 [2024-07-13 15:44:59.493609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.823 [2024-07-13 15:44:59.493883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.823 [2024-07-13 15:44:59.493909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.823 [2024-07-13 15:44:59.493934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.823 [2024-07-13 15:44:59.497552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.823 [2024-07-13 15:44:59.506691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.823 [2024-07-13 15:44:59.507188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.823 [2024-07-13 15:44:59.507224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.823 [2024-07-13 15:44:59.507254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.823 [2024-07-13 15:44:59.507534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.823 [2024-07-13 15:44:59.507798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.823 [2024-07-13 15:44:59.507824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.823 [2024-07-13 15:44:59.507849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.823 [2024-07-13 15:44:59.511480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.823 [2024-07-13 15:44:59.520585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.823 [2024-07-13 15:44:59.521074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.823 [2024-07-13 15:44:59.521109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.823 [2024-07-13 15:44:59.521139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.823 [2024-07-13 15:44:59.521425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.823 [2024-07-13 15:44:59.521690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.823 [2024-07-13 15:44:59.521716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.823 [2024-07-13 15:44:59.521741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.823 [2024-07-13 15:44:59.525380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.823 [2024-07-13 15:44:59.534479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.823 [2024-07-13 15:44:59.534957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.823 [2024-07-13 15:44:59.534992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.823 [2024-07-13 15:44:59.535022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.823 [2024-07-13 15:44:59.535305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.823 [2024-07-13 15:44:59.535577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.823 [2024-07-13 15:44:59.535604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.823 [2024-07-13 15:44:59.535628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.823 [2024-07-13 15:44:59.539261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.823 [2024-07-13 15:44:59.548358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.823 [2024-07-13 15:44:59.548799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.823 [2024-07-13 15:44:59.548833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.823 [2024-07-13 15:44:59.548864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.823 [2024-07-13 15:44:59.549161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.823 [2024-07-13 15:44:59.549426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.823 [2024-07-13 15:44:59.549453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.823 [2024-07-13 15:44:59.549478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.823 [2024-07-13 15:44:59.553102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:28.823 [2024-07-13 15:44:59.562402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.823 [2024-07-13 15:44:59.562873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.823 [2024-07-13 15:44:59.562907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.823 [2024-07-13 15:44:59.562937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.823 [2024-07-13 15:44:59.563218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.823 [2024-07-13 15:44:59.563483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.823 [2024-07-13 15:44:59.563509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.823 [2024-07-13 15:44:59.563533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.823 [2024-07-13 15:44:59.567158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:28.823 [2024-07-13 15:44:59.576254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:28.823 [2024-07-13 15:44:59.576720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.823 [2024-07-13 15:44:59.576755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:28.823 [2024-07-13 15:44:59.576784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:28.823 [2024-07-13 15:44:59.577080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:28.823 [2024-07-13 15:44:59.577344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:28.823 [2024-07-13 15:44:59.577371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:28.823 [2024-07-13 15:44:59.577395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:28.823 [2024-07-13 15:44:59.581026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.083 [2024-07-13 15:44:59.590164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.083 [2024-07-13 15:44:59.590643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-07-13 15:44:59.590678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.083 [2024-07-13 15:44:59.590710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.083 [2024-07-13 15:44:59.591035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.083 [2024-07-13 15:44:59.591301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.083 [2024-07-13 15:44:59.591328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.083 [2024-07-13 15:44:59.591352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.083 [2024-07-13 15:44:59.595003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.083 [2024-07-13 15:44:59.604100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.083 [2024-07-13 15:44:59.604573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-07-13 15:44:59.604607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.083 [2024-07-13 15:44:59.604637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.083 [2024-07-13 15:44:59.604932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.083 [2024-07-13 15:44:59.605197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.083 [2024-07-13 15:44:59.605224] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.083 [2024-07-13 15:44:59.605248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.083 [2024-07-13 15:44:59.608872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.083 [2024-07-13 15:44:59.617965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.083 [2024-07-13 15:44:59.618432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-07-13 15:44:59.618467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.083 [2024-07-13 15:44:59.618495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.083 [2024-07-13 15:44:59.618778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.083 [2024-07-13 15:44:59.619060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.083 [2024-07-13 15:44:59.619087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.083 [2024-07-13 15:44:59.619112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.083 [2024-07-13 15:44:59.622736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.083 [2024-07-13 15:44:59.631824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.083 [2024-07-13 15:44:59.632269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-07-13 15:44:59.632303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.083 [2024-07-13 15:44:59.632340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.083 [2024-07-13 15:44:59.632619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.083 [2024-07-13 15:44:59.632898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.084 [2024-07-13 15:44:59.632925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.084 [2024-07-13 15:44:59.632950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.084 [2024-07-13 15:44:59.636571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.084 [2024-07-13 15:44:59.645681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.084 [2024-07-13 15:44:59.646161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-07-13 15:44:59.646197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.084 [2024-07-13 15:44:59.646227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.084 [2024-07-13 15:44:59.646509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.084 [2024-07-13 15:44:59.646773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.084 [2024-07-13 15:44:59.646799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.084 [2024-07-13 15:44:59.646824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.084 [2024-07-13 15:44:59.650457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.084 [2024-07-13 15:44:59.659549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.084 [2024-07-13 15:44:59.660018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-07-13 15:44:59.660054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.084 [2024-07-13 15:44:59.660083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.084 [2024-07-13 15:44:59.660363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.084 [2024-07-13 15:44:59.660627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.084 [2024-07-13 15:44:59.660654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.084 [2024-07-13 15:44:59.660679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.084 [2024-07-13 15:44:59.664303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.084 [2024-07-13 15:44:59.673398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.084 [2024-07-13 15:44:59.673864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-07-13 15:44:59.673905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.084 [2024-07-13 15:44:59.673934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.084 [2024-07-13 15:44:59.674215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.084 [2024-07-13 15:44:59.674479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.084 [2024-07-13 15:44:59.674511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.084 [2024-07-13 15:44:59.674537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.084 [2024-07-13 15:44:59.678261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.084 [2024-07-13 15:44:59.687364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.084 [2024-07-13 15:44:59.687837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-07-13 15:44:59.687880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.084 [2024-07-13 15:44:59.687912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.084 [2024-07-13 15:44:59.688193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.084 [2024-07-13 15:44:59.688457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.084 [2024-07-13 15:44:59.688484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.084 [2024-07-13 15:44:59.688509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.084 [2024-07-13 15:44:59.692136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.084 [2024-07-13 15:44:59.701238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.084 [2024-07-13 15:44:59.701707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-07-13 15:44:59.701742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.084 [2024-07-13 15:44:59.701771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.084 [2024-07-13 15:44:59.702066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.084 [2024-07-13 15:44:59.702331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.084 [2024-07-13 15:44:59.702357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.084 [2024-07-13 15:44:59.702381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.084 [2024-07-13 15:44:59.706008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.084 [2024-07-13 15:44:59.715166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.084 [2024-07-13 15:44:59.715660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-07-13 15:44:59.715696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.084 [2024-07-13 15:44:59.715727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.084 [2024-07-13 15:44:59.716022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.084 [2024-07-13 15:44:59.716289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.084 [2024-07-13 15:44:59.716316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.084 [2024-07-13 15:44:59.716341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.084 [2024-07-13 15:44:59.719975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.084 [2024-07-13 15:44:59.729093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.084 [2024-07-13 15:44:59.729569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-07-13 15:44:59.729604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.084 [2024-07-13 15:44:59.729633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.084 [2024-07-13 15:44:59.729924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.084 [2024-07-13 15:44:59.730188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.084 [2024-07-13 15:44:59.730216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.084 [2024-07-13 15:44:59.730240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.084 [2024-07-13 15:44:59.733854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.084 [2024-07-13 15:44:59.742952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.084 [2024-07-13 15:44:59.743428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-07-13 15:44:59.743462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.084 [2024-07-13 15:44:59.743492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.084 [2024-07-13 15:44:59.743774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.084 [2024-07-13 15:44:59.744054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.084 [2024-07-13 15:44:59.744081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.084 [2024-07-13 15:44:59.744105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.084 [2024-07-13 15:44:59.747726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.084 [2024-07-13 15:44:59.756822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.084 [2024-07-13 15:44:59.757307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-07-13 15:44:59.757343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.084 [2024-07-13 15:44:59.757373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.084 [2024-07-13 15:44:59.757656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.084 [2024-07-13 15:44:59.757931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.084 [2024-07-13 15:44:59.757959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.084 [2024-07-13 15:44:59.757984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.084 [2024-07-13 15:44:59.761617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.084 [2024-07-13 15:44:59.770714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.084 [2024-07-13 15:44:59.771190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-07-13 15:44:59.771225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.084 [2024-07-13 15:44:59.771254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.084 [2024-07-13 15:44:59.771542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.084 [2024-07-13 15:44:59.771807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.084 [2024-07-13 15:44:59.771833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.084 [2024-07-13 15:44:59.771858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.084 [2024-07-13 15:44:59.775487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.084 [2024-07-13 15:44:59.784684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.084 [2024-07-13 15:44:59.785165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-07-13 15:44:59.785202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.084 [2024-07-13 15:44:59.785232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.084 [2024-07-13 15:44:59.785515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.085 [2024-07-13 15:44:59.785794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.085 [2024-07-13 15:44:59.785822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.085 [2024-07-13 15:44:59.785847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.085 [2024-07-13 15:44:59.789585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.085 [2024-07-13 15:44:59.798578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.085 [2024-07-13 15:44:59.799056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-07-13 15:44:59.799091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.085 [2024-07-13 15:44:59.799120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.085 [2024-07-13 15:44:59.799403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.085 [2024-07-13 15:44:59.799668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.085 [2024-07-13 15:44:59.799694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.085 [2024-07-13 15:44:59.799719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.085 [2024-07-13 15:44:59.803350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.085 [2024-07-13 15:44:59.812453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.085 [2024-07-13 15:44:59.812919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-07-13 15:44:59.812954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.085 [2024-07-13 15:44:59.812984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.085 [2024-07-13 15:44:59.813267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.085 [2024-07-13 15:44:59.813531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.085 [2024-07-13 15:44:59.813558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.085 [2024-07-13 15:44:59.813590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.085 [2024-07-13 15:44:59.817214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.085 [2024-07-13 15:44:59.826308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.085 [2024-07-13 15:44:59.826752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-07-13 15:44:59.826786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.085 [2024-07-13 15:44:59.826816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.085 [2024-07-13 15:44:59.827108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.085 [2024-07-13 15:44:59.827376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.085 [2024-07-13 15:44:59.827403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.085 [2024-07-13 15:44:59.827428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.085 [2024-07-13 15:44:59.831051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.085 [2024-07-13 15:44:59.840347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.085 [2024-07-13 15:44:59.840820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-07-13 15:44:59.840856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.085 [2024-07-13 15:44:59.840897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.085 [2024-07-13 15:44:59.841178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.085 [2024-07-13 15:44:59.841442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.085 [2024-07-13 15:44:59.841468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.085 [2024-07-13 15:44:59.841492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.085 [2024-07-13 15:44:59.845204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.344 [2024-07-13 15:44:59.854302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.344 [2024-07-13 15:44:59.854756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-07-13 15:44:59.854790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.344 [2024-07-13 15:44:59.854819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.344 [2024-07-13 15:44:59.855112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.344 [2024-07-13 15:44:59.855376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.344 [2024-07-13 15:44:59.855403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.344 [2024-07-13 15:44:59.855428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.344 [2024-07-13 15:44:59.859054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.344 [2024-07-13 15:44:59.868146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.344 [2024-07-13 15:44:59.868622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-07-13 15:44:59.868656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.344 [2024-07-13 15:44:59.868685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.344 [2024-07-13 15:44:59.868980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.344 [2024-07-13 15:44:59.869245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.344 [2024-07-13 15:44:59.869271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.344 [2024-07-13 15:44:59.869296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.344 [2024-07-13 15:44:59.872921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.344 [2024-07-13 15:44:59.882019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.344 [2024-07-13 15:44:59.882457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-07-13 15:44:59.882492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.344 [2024-07-13 15:44:59.882522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.344 [2024-07-13 15:44:59.882804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.344 [2024-07-13 15:44:59.883081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.344 [2024-07-13 15:44:59.883108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.344 [2024-07-13 15:44:59.883133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.344 [2024-07-13 15:44:59.886752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.344 [2024-07-13 15:44:59.896066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.344 [2024-07-13 15:44:59.896542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-07-13 15:44:59.896577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.344 [2024-07-13 15:44:59.896607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.344 [2024-07-13 15:44:59.896900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.344 [2024-07-13 15:44:59.897164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.344 [2024-07-13 15:44:59.897191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.344 [2024-07-13 15:44:59.897216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.344 [2024-07-13 15:44:59.900836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.345 [2024-07-13 15:44:59.909946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.345 [2024-07-13 15:44:59.910417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-13 15:44:59.910452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.345 [2024-07-13 15:44:59.910481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.345 [2024-07-13 15:44:59.910767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.345 [2024-07-13 15:44:59.911045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.345 [2024-07-13 15:44:59.911072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.345 [2024-07-13 15:44:59.911097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.345 [2024-07-13 15:44:59.914719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.345 [2024-07-13 15:44:59.923872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.345 [2024-07-13 15:44:59.924321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-13 15:44:59.924356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.345 [2024-07-13 15:44:59.924386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.345 [2024-07-13 15:44:59.924668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.345 [2024-07-13 15:44:59.924945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.345 [2024-07-13 15:44:59.924971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.345 [2024-07-13 15:44:59.924996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.345 [2024-07-13 15:44:59.928620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.345 [2024-07-13 15:44:59.937715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.345 [2024-07-13 15:44:59.938191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-13 15:44:59.938225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.345 [2024-07-13 15:44:59.938256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.345 [2024-07-13 15:44:59.938538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.345 [2024-07-13 15:44:59.938804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.345 [2024-07-13 15:44:59.938831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.345 [2024-07-13 15:44:59.938855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.345 [2024-07-13 15:44:59.942488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.345 [2024-07-13 15:44:59.951594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.345 [2024-07-13 15:44:59.952081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-13 15:44:59.952116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.345 [2024-07-13 15:44:59.952146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.345 [2024-07-13 15:44:59.952427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.345 [2024-07-13 15:44:59.952692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.345 [2024-07-13 15:44:59.952718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.345 [2024-07-13 15:44:59.952750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.345 [2024-07-13 15:44:59.956378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.345 [2024-07-13 15:44:59.965478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.345 [2024-07-13 15:44:59.965957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-13 15:44:59.965993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.345 [2024-07-13 15:44:59.966023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.345 [2024-07-13 15:44:59.966305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.345 [2024-07-13 15:44:59.966569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.345 [2024-07-13 15:44:59.966596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.345 [2024-07-13 15:44:59.966620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.345 [2024-07-13 15:44:59.970253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.345 [2024-07-13 15:44:59.979373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.345 [2024-07-13 15:44:59.979918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-13 15:44:59.979953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.345 [2024-07-13 15:44:59.979982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.345 [2024-07-13 15:44:59.980265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.345 [2024-07-13 15:44:59.980529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.345 [2024-07-13 15:44:59.980556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.345 [2024-07-13 15:44:59.980581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.345 [2024-07-13 15:44:59.984203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.345 [2024-07-13 15:44:59.993291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.345 [2024-07-13 15:44:59.993747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-13 15:44:59.993782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.345 [2024-07-13 15:44:59.993812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.345 [2024-07-13 15:44:59.994106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.345 [2024-07-13 15:44:59.994371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.345 [2024-07-13 15:44:59.994398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.345 [2024-07-13 15:44:59.994422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.345 [2024-07-13 15:44:59.998047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.345 [2024-07-13 15:45:00.007844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.345 [2024-07-13 15:45:00.008323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-13 15:45:00.008367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.345 [2024-07-13 15:45:00.008400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.345 [2024-07-13 15:45:00.008685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.345 [2024-07-13 15:45:00.008967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.345 [2024-07-13 15:45:00.008995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.345 [2024-07-13 15:45:00.009022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.345 [2024-07-13 15:45:00.012650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.345 [2024-07-13 15:45:00.021766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.345 [2024-07-13 15:45:00.022315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-13 15:45:00.022351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.345 [2024-07-13 15:45:00.022382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.345 [2024-07-13 15:45:00.022664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.345 [2024-07-13 15:45:00.022948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.345 [2024-07-13 15:45:00.022975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.345 [2024-07-13 15:45:00.023000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.345 [2024-07-13 15:45:00.026620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.345 [2024-07-13 15:45:00.035737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.345 [2024-07-13 15:45:00.036239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-13 15:45:00.036275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.345 [2024-07-13 15:45:00.036306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.345 [2024-07-13 15:45:00.036590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.345 [2024-07-13 15:45:00.036856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.345 [2024-07-13 15:45:00.036893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.345 [2024-07-13 15:45:00.036919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.345 [2024-07-13 15:45:00.040538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.345 [2024-07-13 15:45:00.049640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.345 [2024-07-13 15:45:00.050099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-07-13 15:45:00.050134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.345 [2024-07-13 15:45:00.050167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.345 [2024-07-13 15:45:00.050449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.345 [2024-07-13 15:45:00.050724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.345 [2024-07-13 15:45:00.050751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.346 [2024-07-13 15:45:00.050777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.346 [2024-07-13 15:45:00.054416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.346 [2024-07-13 15:45:00.063552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.346 [2024-07-13 15:45:00.064014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-13 15:45:00.064050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.346 [2024-07-13 15:45:00.064081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.346 [2024-07-13 15:45:00.064361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.346 [2024-07-13 15:45:00.064627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.346 [2024-07-13 15:45:00.064653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.346 [2024-07-13 15:45:00.064679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.346 [2024-07-13 15:45:00.068312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.346 [2024-07-13 15:45:00.077415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.346 [2024-07-13 15:45:00.077898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-13 15:45:00.077933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.346 [2024-07-13 15:45:00.077963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.346 [2024-07-13 15:45:00.078247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.346 [2024-07-13 15:45:00.078511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.346 [2024-07-13 15:45:00.078537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.346 [2024-07-13 15:45:00.078561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.346 [2024-07-13 15:45:00.082187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.346 [2024-07-13 15:45:00.091279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.346 [2024-07-13 15:45:00.091750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-13 15:45:00.091785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.346 [2024-07-13 15:45:00.091814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.346 [2024-07-13 15:45:00.092109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.346 [2024-07-13 15:45:00.092376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.346 [2024-07-13 15:45:00.092403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.346 [2024-07-13 15:45:00.092427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.346 [2024-07-13 15:45:00.096054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.346 [2024-07-13 15:45:00.105200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.346 [2024-07-13 15:45:00.105649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-07-13 15:45:00.105684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.346 [2024-07-13 15:45:00.105713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.346 [2024-07-13 15:45:00.106027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.346 [2024-07-13 15:45:00.106293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.346 [2024-07-13 15:45:00.106320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.346 [2024-07-13 15:45:00.106344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.605 [2024-07-13 15:45:00.110106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.605 [2024-07-13 15:45:00.119121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.605 [2024-07-13 15:45:00.119632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.605 [2024-07-13 15:45:00.119685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.605 [2024-07-13 15:45:00.119716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.605 [2024-07-13 15:45:00.120011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.605 [2024-07-13 15:45:00.120277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.605 [2024-07-13 15:45:00.120304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.605 [2024-07-13 15:45:00.120330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.605 [2024-07-13 15:45:00.123971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.605 [2024-07-13 15:45:00.133162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.605 [2024-07-13 15:45:00.133654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.605 [2024-07-13 15:45:00.133688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.605 [2024-07-13 15:45:00.133724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.605 [2024-07-13 15:45:00.134020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.605 [2024-07-13 15:45:00.134287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.605 [2024-07-13 15:45:00.134313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.605 [2024-07-13 15:45:00.134338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.605 [2024-07-13 15:45:00.137962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.605 [2024-07-13 15:45:00.147064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.605 [2024-07-13 15:45:00.147524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.605 [2024-07-13 15:45:00.147558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.605 [2024-07-13 15:45:00.147596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.605 [2024-07-13 15:45:00.147884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.605 [2024-07-13 15:45:00.148154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.605 [2024-07-13 15:45:00.148180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.605 [2024-07-13 15:45:00.148205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.605 [2024-07-13 15:45:00.151837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.605 [2024-07-13 15:45:00.160951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.605 [2024-07-13 15:45:00.161422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.605 [2024-07-13 15:45:00.161457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.605 [2024-07-13 15:45:00.161487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.605 [2024-07-13 15:45:00.161775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.605 [2024-07-13 15:45:00.162057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.605 [2024-07-13 15:45:00.162085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.605 [2024-07-13 15:45:00.162110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.605 [2024-07-13 15:45:00.165726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.605 [2024-07-13 15:45:00.174817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.605 [2024-07-13 15:45:00.175286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.605 [2024-07-13 15:45:00.175321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.605 [2024-07-13 15:45:00.175352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.605 [2024-07-13 15:45:00.175647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.605 [2024-07-13 15:45:00.175923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.605 [2024-07-13 15:45:00.175950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.605 [2024-07-13 15:45:00.175975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.605 [2024-07-13 15:45:00.179600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.605 [2024-07-13 15:45:00.188719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.605 [2024-07-13 15:45:00.189177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.605 [2024-07-13 15:45:00.189213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420
00:33:29.605 [2024-07-13 15:45:00.189244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set
00:33:29.605 [2024-07-13 15:45:00.189524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor
00:33:29.605 [2024-07-13 15:45:00.189790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.605 [2024-07-13 15:45:00.189823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.605 [2024-07-13 15:45:00.189848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.605 [2024-07-13 15:45:00.193478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1257790 Killed "${NVMF_APP[@]}" "$@"
00:33:29.606 15:45:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:33:29.606 15:45:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:33:29.606 15:45:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:33:29.606 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:33:29.606 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:29.606 15:45:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1258857
00:33:29.606 15:45:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:33:29.606 15:45:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1258857
00:33:29.606 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1258857 ']'
00:33:29.606 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:29.606 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:29.606 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:29.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:29.606 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:29.606 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:29.606 [2024-07-13 15:45:00.202592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.606 [2024-07-13 15:45:00.205077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.606 [2024-07-13 15:45:00.205115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.606 [2024-07-13 15:45:00.205146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.606 [2024-07-13 15:45:00.205460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.606 [2024-07-13 15:45:00.205765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.606 [2024-07-13 15:45:00.205794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.606 [2024-07-13 15:45:00.205819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.606 [2024-07-13 15:45:00.209468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.606 [2024-07-13 15:45:00.216502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.606 [2024-07-13 15:45:00.216965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.606 [2024-07-13 15:45:00.217002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.606 [2024-07-13 15:45:00.217032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.606 [2024-07-13 15:45:00.217317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.606 [2024-07-13 15:45:00.217584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.606 [2024-07-13 15:45:00.217610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.606 [2024-07-13 15:45:00.217642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.606 [2024-07-13 15:45:00.221166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.606 [2024-07-13 15:45:00.230043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.606 [2024-07-13 15:45:00.230517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.606 [2024-07-13 15:45:00.230549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.606 [2024-07-13 15:45:00.230576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.606 [2024-07-13 15:45:00.230888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.606 [2024-07-13 15:45:00.231126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.606 [2024-07-13 15:45:00.231150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.606 [2024-07-13 15:45:00.231194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.606 [2024-07-13 15:45:00.234398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.606 [2024-07-13 15:45:00.243516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.606 [2024-07-13 15:45:00.243971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.606 [2024-07-13 15:45:00.244003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.606 [2024-07-13 15:45:00.244031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.606 [2024-07-13 15:45:00.244311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.606 [2024-07-13 15:45:00.244528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.606 [2024-07-13 15:45:00.244550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.606 [2024-07-13 15:45:00.244571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.606 [2024-07-13 15:45:00.247802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.606 [2024-07-13 15:45:00.249283] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 
00:33:29.606 [2024-07-13 15:45:00.249340] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:29.606 [2024-07-13 15:45:00.256746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.606 [2024-07-13 15:45:00.257201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.606 [2024-07-13 15:45:00.257233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.606 [2024-07-13 15:45:00.257260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.606 [2024-07-13 15:45:00.257540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.606 [2024-07-13 15:45:00.257750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.606 [2024-07-13 15:45:00.257772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.606 [2024-07-13 15:45:00.257800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.606 [2024-07-13 15:45:00.260940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.606 [2024-07-13 15:45:00.270062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.606 [2024-07-13 15:45:00.270497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.606 [2024-07-13 15:45:00.270528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.606 [2024-07-13 15:45:00.270554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.606 [2024-07-13 15:45:00.270825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.606 [2024-07-13 15:45:00.271088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.606 [2024-07-13 15:45:00.271111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.606 [2024-07-13 15:45:00.271131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.606 [2024-07-13 15:45:00.274318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.606 [2024-07-13 15:45:00.283364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.606 [2024-07-13 15:45:00.283798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.606 [2024-07-13 15:45:00.283830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.606 [2024-07-13 15:45:00.283857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.606 [2024-07-13 15:45:00.284121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.606 [2024-07-13 15:45:00.284371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.606 [2024-07-13 15:45:00.284393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.606 [2024-07-13 15:45:00.284414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.606 EAL: No free 2048 kB hugepages reported on node 1 00:33:29.606 [2024-07-13 15:45:00.287568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.606 [2024-07-13 15:45:00.289726] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:29.606 [2024-07-13 15:45:00.297292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.606 [2024-07-13 15:45:00.297762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.606 [2024-07-13 15:45:00.297797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.606 [2024-07-13 15:45:00.297827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.606 [2024-07-13 15:45:00.298112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.606 [2024-07-13 15:45:00.298391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.606 [2024-07-13 15:45:00.298418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.606 [2024-07-13 15:45:00.298444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.606 [2024-07-13 15:45:00.302089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.606 [2024-07-13 15:45:00.311298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.606 [2024-07-13 15:45:00.311782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.606 [2024-07-13 15:45:00.311817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.606 [2024-07-13 15:45:00.311848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.606 [2024-07-13 15:45:00.312129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.606 [2024-07-13 15:45:00.312408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.606 [2024-07-13 15:45:00.312435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.606 [2024-07-13 15:45:00.312460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.607 [2024-07-13 15:45:00.316074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.607 [2024-07-13 15:45:00.320985] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:29.607 [2024-07-13 15:45:00.325135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.607 [2024-07-13 15:45:00.325619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.607 [2024-07-13 15:45:00.325656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.607 [2024-07-13 15:45:00.325686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.607 [2024-07-13 15:45:00.325983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.607 [2024-07-13 15:45:00.326238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.607 [2024-07-13 15:45:00.326266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.607 [2024-07-13 15:45:00.326292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.607 [2024-07-13 15:45:00.329925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.607 [2024-07-13 15:45:00.339275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.607 [2024-07-13 15:45:00.339929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.607 [2024-07-13 15:45:00.339966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.607 [2024-07-13 15:45:00.339999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.607 [2024-07-13 15:45:00.340286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.607 [2024-07-13 15:45:00.340554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.607 [2024-07-13 15:45:00.340582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.607 [2024-07-13 15:45:00.340610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.607 [2024-07-13 15:45:00.344276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.607 [2024-07-13 15:45:00.353243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.607 [2024-07-13 15:45:00.353738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.607 [2024-07-13 15:45:00.353770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.607 [2024-07-13 15:45:00.353806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.607 [2024-07-13 15:45:00.354068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.607 [2024-07-13 15:45:00.354315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.607 [2024-07-13 15:45:00.354338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.607 [2024-07-13 15:45:00.354359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.607 [2024-07-13 15:45:00.357537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.607 [2024-07-13 15:45:00.366895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.607 [2024-07-13 15:45:00.367428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.607 [2024-07-13 15:45:00.367462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.607 [2024-07-13 15:45:00.367490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.607 [2024-07-13 15:45:00.367830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.607 [2024-07-13 15:45:00.368088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.607 [2024-07-13 15:45:00.368113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.607 [2024-07-13 15:45:00.368137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.866 [2024-07-13 15:45:00.371659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.866 [2024-07-13 15:45:00.380332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.866 [2024-07-13 15:45:00.380836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.866 [2024-07-13 15:45:00.380908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.866 [2024-07-13 15:45:00.380940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.866 [2024-07-13 15:45:00.381200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.866 [2024-07-13 15:45:00.381422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.866 [2024-07-13 15:45:00.381446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.866 [2024-07-13 15:45:00.381469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.866 [2024-07-13 15:45:00.384599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.866 [2024-07-13 15:45:00.394334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.866 [2024-07-13 15:45:00.394905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.866 [2024-07-13 15:45:00.394940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.866 [2024-07-13 15:45:00.394969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.866 [2024-07-13 15:45:00.395256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.866 [2024-07-13 15:45:00.395536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.866 [2024-07-13 15:45:00.395572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.866 [2024-07-13 15:45:00.395600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.866 [2024-07-13 15:45:00.399153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.866 [2024-07-13 15:45:00.408330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.866 [2024-07-13 15:45:00.408883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.866 [2024-07-13 15:45:00.408916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.866 [2024-07-13 15:45:00.408944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.866 [2024-07-13 15:45:00.409247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.866 [2024-07-13 15:45:00.409512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.866 [2024-07-13 15:45:00.409539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.866 [2024-07-13 15:45:00.409566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.866 [2024-07-13 15:45:00.413131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.866 [2024-07-13 15:45:00.415973] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:29.866 [2024-07-13 15:45:00.416002] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:29.866 [2024-07-13 15:45:00.416030] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:29.866 [2024-07-13 15:45:00.416045] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:29.866 [2024-07-13 15:45:00.416056] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:29.866 [2024-07-13 15:45:00.416165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:33:29.866 [2024-07-13 15:45:00.416257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:33:29.866 [2024-07-13 15:45:00.416260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:29.866 [2024-07-13 15:45:00.421885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.866 [2024-07-13 15:45:00.422420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.866 [2024-07-13 15:45:00.422456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420
00:33:29.866 [2024-07-13 15:45:00.422488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set
00:33:29.866 [2024-07-13 15:45:00.422770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor
00:33:29.866 [2024-07-13 15:45:00.423033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.866 [2024-07-13 15:45:00.423058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.866 [2024-07-13 15:45:00.423083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.866 [2024-07-13 15:45:00.426289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.866 [2024-07-13 15:45:00.435483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:29.866 [2024-07-13 15:45:00.436075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.866 [2024-07-13 15:45:00.436118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420
00:33:29.866 [2024-07-13 15:45:00.436162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set
00:33:29.866 [2024-07-13 15:45:00.436444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor
00:33:29.866 [2024-07-13 15:45:00.436681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:29.866 [2024-07-13 15:45:00.436705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:29.866 [2024-07-13 15:45:00.436732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:29.866 [2024-07-13 15:45:00.440203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:29.866 [2024-07-13 15:45:00.449221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.866 [2024-07-13 15:45:00.449890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.866 [2024-07-13 15:45:00.449932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.866 [2024-07-13 15:45:00.449966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.866 [2024-07-13 15:45:00.450236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.866 [2024-07-13 15:45:00.450464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.866 [2024-07-13 15:45:00.450488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.866 [2024-07-13 15:45:00.450514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.867 [2024-07-13 15:45:00.453751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.867 [2024-07-13 15:45:00.462930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.867 [2024-07-13 15:45:00.463522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.867 [2024-07-13 15:45:00.463565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.867 [2024-07-13 15:45:00.463597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.867 [2024-07-13 15:45:00.463911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.867 [2024-07-13 15:45:00.464148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.867 [2024-07-13 15:45:00.464179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.867 [2024-07-13 15:45:00.464220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.867 [2024-07-13 15:45:00.467441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.867 [2024-07-13 15:45:00.476515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.867 [2024-07-13 15:45:00.477007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.867 [2024-07-13 15:45:00.477046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.867 [2024-07-13 15:45:00.477078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.867 [2024-07-13 15:45:00.477379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.867 [2024-07-13 15:45:00.477606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.867 [2024-07-13 15:45:00.477638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.867 [2024-07-13 15:45:00.477663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.867 [2024-07-13 15:45:00.480932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.867 [2024-07-13 15:45:00.490063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.867 [2024-07-13 15:45:00.490699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.867 [2024-07-13 15:45:00.490741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.867 [2024-07-13 15:45:00.490787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.867 [2024-07-13 15:45:00.491093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.867 [2024-07-13 15:45:00.491352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.867 [2024-07-13 15:45:00.491376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.867 [2024-07-13 15:45:00.491401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.867 [2024-07-13 15:45:00.494626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.867 [2024-07-13 15:45:00.503637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.867 [2024-07-13 15:45:00.504069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.867 [2024-07-13 15:45:00.504101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.867 [2024-07-13 15:45:00.504129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.867 [2024-07-13 15:45:00.504414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.867 [2024-07-13 15:45:00.504649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.867 [2024-07-13 15:45:00.504671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.867 [2024-07-13 15:45:00.504701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.867 [2024-07-13 15:45:00.507922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.867 [2024-07-13 15:45:00.517312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.867 [2024-07-13 15:45:00.517713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.867 [2024-07-13 15:45:00.517745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.867 [2024-07-13 15:45:00.517772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.867 [2024-07-13 15:45:00.518042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.867 [2024-07-13 15:45:00.518295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.867 [2024-07-13 15:45:00.518318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.867 [2024-07-13 15:45:00.518340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.867 [2024-07-13 15:45:00.521617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
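The chunks above all repeat one pattern: posix_sock_create gets errno 111 back from connect(), nvme_tcp_qpair_connect_sock therefore cannot reach 10.0.0.2:4420, controller init fails, and bdev_nvme schedules another reset a few milliseconds later. On Linux, errno 111 is ECONNREFUSED, i.e. nothing is listening on that address/port yet; the retries keep failing until the nvmf_subsystem_add_listener call further down brings the listener up, after which the reset finally completes ("Resetting controller successful"). A quick way to confirm the errno name outside the harness (a sketch, not harness output):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused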
00:33:29.867 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:29.867 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:33:29.867 15:45:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:29.867 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:29.867 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:29.867 [2024-07-13 15:45:00.530842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.867 [2024-07-13 15:45:00.531335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.867 [2024-07-13 15:45:00.531367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.867 [2024-07-13 15:45:00.531404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.867 [2024-07-13 15:45:00.531671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.867 [2024-07-13 15:45:00.531936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.867 [2024-07-13 15:45:00.531960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.867 [2024-07-13 15:45:00.531984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.867 [2024-07-13 15:45:00.535245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.867 [2024-07-13 15:45:00.544457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.867 [2024-07-13 15:45:00.544901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.867 [2024-07-13 15:45:00.544933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.867 [2024-07-13 15:45:00.544961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.867 15:45:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:29.867 [2024-07-13 15:45:00.545228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.867 15:45:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:29.867 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.867 [2024-07-13 15:45:00.545453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.867 [2024-07-13 15:45:00.545477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.867 [2024-07-13 15:45:00.545497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:29.867 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:29.867 [2024-07-13 15:45:00.548765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.867 [2024-07-13 15:45:00.549582] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:29.867 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.867 15:45:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:29.867 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.867 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:29.867 [2024-07-13 15:45:00.558036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.867 [2024-07-13 15:45:00.558531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.867 [2024-07-13 15:45:00.558563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.867 [2024-07-13 15:45:00.558597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.867 [2024-07-13 15:45:00.558898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.867 [2024-07-13 15:45:00.559151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.867 [2024-07-13 15:45:00.559189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.867 [2024-07-13 15:45:00.559210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.867 [2024-07-13 15:45:00.562426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.867 [2024-07-13 15:45:00.571593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.867 [2024-07-13 15:45:00.572020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.867 [2024-07-13 15:45:00.572053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.867 [2024-07-13 15:45:00.572081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.867 [2024-07-13 15:45:00.572365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.867 [2024-07-13 15:45:00.572612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.867 [2024-07-13 15:45:00.572635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.867 [2024-07-13 15:45:00.572659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.867 [2024-07-13 15:45:00.576001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.867 [2024-07-13 15:45:00.585214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.867 [2024-07-13 15:45:00.585775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.867 [2024-07-13 15:45:00.585813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.867 [2024-07-13 15:45:00.585845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.867 [2024-07-13 15:45:00.586119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.868 [2024-07-13 15:45:00.586378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.868 [2024-07-13 15:45:00.586401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.868 [2024-07-13 15:45:00.586428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.868 Malloc0 00:33:29.868 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.868 15:45:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:29.868 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.868 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:29.868 [2024-07-13 15:45:00.589750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:29.868 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.868 15:45:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:29.868 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.868 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:29.868 [2024-07-13 15:45:00.598820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:29.868 [2024-07-13 15:45:00.599358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.868 [2024-07-13 15:45:00.599390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2330b50 with addr=10.0.0.2, port=4420 00:33:29.868 [2024-07-13 15:45:00.599417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2330b50 is same with the state(5) to be set 00:33:29.868 [2024-07-13 15:45:00.599693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2330b50 (9): Bad file descriptor 00:33:29.868 [2024-07-13 15:45:00.599945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:29.868 [2024-07-13 15:45:00.599969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:29.868 [2024-07-13 15:45:00.599993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:29.868 [2024-07-13 15:45:00.603291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:29.868 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.868 15:45:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:29.868 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:29.868 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:29.868 [2024-07-13 15:45:00.607280] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:29.868 15:45:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:29.868 15:45:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1258077 00:33:29.868 [2024-07-13 15:45:00.612345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:30.125 [2024-07-13 15:45:00.775200] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:40.091 00:33:40.091 Latency(us) 00:33:40.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:40.091 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:40.091 Verification LBA range: start 0x0 length 0x4000 00:33:40.091 Nvme1n1 : 15.01 6709.56 26.21 8689.03 0.00 8287.03 825.27 16990.81 00:33:40.091 =================================================================================================================== 00:33:40.091 Total : 6709.56 26.21 8689.03 0.00 8287.03 825.27 16990.81 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:40.091 rmmod nvme_tcp 00:33:40.091 rmmod nvme_fabrics 00:33:40.091 rmmod nvme_keyring 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1258857 ']' 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1258857 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1258857 ']' 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1258857 00:33:40.091 15:45:09 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1258857 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1258857' 00:33:40.091 killing process with pid 1258857 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1258857 00:33:40.091 15:45:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1258857 00:33:40.091 15:45:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:40.091 15:45:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:40.091 15:45:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:40.091 15:45:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:40.091 15:45:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:40.091 15:45:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:40.091 15:45:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:40.091 15:45:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.994 15:45:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:41.994 00:33:41.994 real 0m22.556s 00:33:41.994 user 0m59.778s 00:33:41.994 sys 0m4.563s 00:33:41.994 15:45:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:41.994 15:45:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:41.994 ************************************ 00:33:41.994 END TEST nvmf_bdevperf 00:33:41.994 ************************************ 00:33:41.994 15:45:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:41.994 15:45:12 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:41.994 15:45:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:41.994 15:45:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:41.994 15:45:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:41.994 ************************************ 00:33:41.994 START TEST nvmf_target_disconnect 00:33:41.994 ************************************ 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:41.994 * Looking for test storage... 
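A quick sanity check of the bdevperf summary just above: the job ran for 15.01 s at 6709.56 IOPS with 4096-byte I/Os, and the MiB/s column is simply IOPS times the I/O size (a sketch of the arithmetic, not part of the test output):

    awk 'BEGIN { printf "%.2f MiB/s\n", 6709.56 * 4096 / 1048576 }'
    # prints 26.21 MiB/s, matching the table

The comparatively large Fail/s figure presumably reflects the windows logged earlier in which the controller was down and I/O completed with error before being retried.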
00:33:41.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:41.994 15:45:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:41.995 15:45:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:41.995 15:45:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:41.995 15:45:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:41.995 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:41.995 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:41.995 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:33:41.995 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:41.995 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:41.995 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.995 15:45:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:41.995 15:45:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.995 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:41.995 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:41.995 15:45:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:33:41.995 15:45:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
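Here nvmf/common.sh builds per-vendor lists of NICs it knows how to drive (the e810 and x722 Intel IDs plus a set of Mellanox IDs) and then walks the PCI bus; in the next chunk it matches two 0x8086:0x159b devices, i.e. both ports of an Intel E810-family adapter, and picks up their net devices cvl_0_0 and cvl_0_1. The same devices can be listed directly outside the harness (a sketch):

    lspci -d 8086:159b
    # expect the two functions at 0000:0a:00.0 and 0000:0a:00.1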
00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:43.894 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:43.895 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:43.895 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.895 15:45:14 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:43.895 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:43.895 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:43.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:43.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:33:43.895 00:33:43.895 --- 10.0.0.2 ping statistics --- 00:33:43.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.895 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:43.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:43.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:33:43.895 00:33:43.895 --- 10.0.0.1 ping statistics --- 00:33:43.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.895 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:43.895 ************************************ 00:33:43.895 START TEST nvmf_target_disconnect_tc1 00:33:43.895 ************************************ 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:33:43.895 
15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:33:43.895 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:44.154 EAL: No free 2048 kB hugepages reported on node 1 00:33:44.154 [2024-07-13 15:45:14.720478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.154 [2024-07-13 15:45:14.720552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8c13e0 with addr=10.0.0.2, port=4420 00:33:44.154 [2024-07-13 15:45:14.720589] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:44.154 [2024-07-13 15:45:14.720611] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:44.154 [2024-07-13 15:45:14.720625] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:33:44.154 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:44.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:44.154 Initializing NVMe Controllers 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:44.154 00:33:44.154 real 0m0.101s 00:33:44.154 user 0m0.045s 00:33:44.154 sys 0m0.055s 
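tc1 is the negative case: the reconnect example is pointed at 10.0.0.2:4420 before any target is listening, spdk_nvme_probe() fails with the same errno 111 seen earlier, and the harness's NOT wrapper turns that expected failure (es=1) into a pass. The shape of the check, roughly (a simplified sketch of the pattern, not the exact helper from autotest_common.sh):

    NOT() { ! "$@"; }    # succeed only if the wrapped command fails
    NOT build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'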
00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:44.154 ************************************ 00:33:44.154 END TEST nvmf_target_disconnect_tc1 00:33:44.154 ************************************ 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:44.154 ************************************ 00:33:44.154 START TEST nvmf_target_disconnect_tc2 00:33:44.154 ************************************ 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1262515 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1262515 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1262515 ']' 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:44.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
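tc2 then starts a real target: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with -m 0xF0, records its pid (1262515 here), and waitforlisten blocks until the app's RPC socket answers. Done by hand, that wait is roughly (a sketch; the socket path is taken from the log above):

    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done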
00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:44.154 15:45:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:44.154 [2024-07-13 15:45:14.839299] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:33:44.154 [2024-07-13 15:45:14.839385] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:44.154 EAL: No free 2048 kB hugepages reported on node 1 00:33:44.154 [2024-07-13 15:45:14.878569] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:44.154 [2024-07-13 15:45:14.909047] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:44.412 [2024-07-13 15:45:14.997810] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:44.412 [2024-07-13 15:45:14.997901] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:44.412 [2024-07-13 15:45:14.997916] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:44.412 [2024-07-13 15:45:14.997927] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:44.412 [2024-07-13 15:45:14.997951] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:44.412 [2024-07-13 15:45:14.998043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:33:44.412 [2024-07-13 15:45:14.998122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:33:44.412 [2024-07-13 15:45:14.998192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:33:44.412 [2024-07-13 15:45:14.998194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:33:44.412 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:44.412 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:33:44.412 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:44.412 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:44.412 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:44.412 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:44.412 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:44.412 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.412 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:44.412 Malloc0 00:33:44.412 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.412 15:45:15 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:44.412 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.412 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:44.412 [2024-07-13 15:45:15.164707] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:44.412 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.412 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:44.412 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.412 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:44.669 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.669 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:44.669 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.669 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:44.669 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.669 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:44.669 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.669 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:44.669 [2024-07-13 15:45:15.192957] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:44.669 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.669 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:44.669 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.669 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:44.669 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.669 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1262653 00:33:44.669 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 
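rpc_cmd in these chunks is the harness wrapper around SPDK's scripts/rpc.py, talking to the nvmf_tgt started above. Issued by hand against the same socket, the target setup for this test would look roughly like this (a sketch assembled from the flags shown in the log, not harness output):

    scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the listener notice appears, the reconnect workload (reconnectpid 1262653) is started with -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF; its core mask 0xF (cores 0-3) does not overlap the target's 0xF0 (cores 4-7), so initiator and target reactors never share a core.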
00:33:44.669 15:45:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:33:44.669 EAL: No free 2048 kB hugepages reported on node 1 00:33:46.574 15:45:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1262515 00:33:46.574 15:45:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:33:46.574 Read completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Read completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Read completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Read completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Read completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Write completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Write completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Read completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Read completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Write completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Write completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Read completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Read completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Write completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Write completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Read completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Write completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Read completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Write completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Read completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Write completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Write completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Read completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Write completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Read completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Read completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Read completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Read completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Read completed with error (sct=0, sc=8) 00:33:46.574 starting I/O failed 00:33:46.574 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 [2024-07-13 15:45:17.218030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 
00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 [2024-07-13 15:45:17.218474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read 
completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 [2024-07-13 15:45:17.218763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed 
with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Write completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 Read completed with error (sct=0, sc=8) 00:33:46.575 starting I/O failed 00:33:46.575 [2024-07-13 15:45:17.219107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.575 [2024-07-13 15:45:17.219343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.219383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 00:33:46.575 [2024-07-13 15:45:17.219589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.219617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 00:33:46.575 [2024-07-13 15:45:17.219806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.219832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 00:33:46.575 [2024-07-13 15:45:17.219985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.220011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 00:33:46.575 [2024-07-13 15:45:17.220181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.220206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 
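The burst above is the expected signature for this disconnect test: the target process was killed with kill -9 while I/O was outstanding, so every queued read/write completes with an error and spdk_nvme_qpair_process_completions() reports a CQ transport error of -6 (-ENXIO, "No such device or address") on qpairs 1 through 4. The following is a minimal, hedged sketch of how a host-side poller might detect that condition; the qpair setup and the recovery decision are assumptions, not the test's actual code.

/* Sketch only: poll an NVMe qpair and treat a negative return from
 * spdk_nvme_qpair_process_completions() as a dropped transport, which is
 * what the "CQ transport error -6 (No such device or address)" lines above
 * correspond to (-6 == -ENXIO). The qpair is assumed to have been connected
 * elsewhere through the public SPDK API. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include "spdk/nvme.h"

static bool
poll_qpair_once(struct spdk_nvme_qpair *qpair)
{
	/* 0 means no cap on the number of completions reaped per call. */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc < 0) {
		/* Outstanding I/O is completed with an error status; the
		 * application decides whether to reconnect or give up. */
		fprintf(stderr, "qpair poll failed: %d (%s)\n", rc, strerror(-rc));
		return false;
	}
	return true;
}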
00:33:46.575 [2024-07-13 15:45:17.220346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.220371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 00:33:46.575 [2024-07-13 15:45:17.220523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.220550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 00:33:46.575 [2024-07-13 15:45:17.220738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.220764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 00:33:46.575 [2024-07-13 15:45:17.220984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.221010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 00:33:46.575 [2024-07-13 15:45:17.221172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.221198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 00:33:46.575 [2024-07-13 15:45:17.221448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.221474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 00:33:46.575 [2024-07-13 15:45:17.221636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.221661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 00:33:46.575 [2024-07-13 15:45:17.221823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.221849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 00:33:46.575 [2024-07-13 15:45:17.222028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.222054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 00:33:46.575 [2024-07-13 15:45:17.222227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.222253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 
00:33:46.575 [2024-07-13 15:45:17.222444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.222470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 00:33:46.575 [2024-07-13 15:45:17.222686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.222728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 00:33:46.575 [2024-07-13 15:45:17.222922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.222949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 00:33:46.575 [2024-07-13 15:45:17.223090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.223116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 00:33:46.575 [2024-07-13 15:45:17.223304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.223334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 00:33:46.575 [2024-07-13 15:45:17.223458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.223484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 00:33:46.575 [2024-07-13 15:45:17.223646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.223688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 00:33:46.575 [2024-07-13 15:45:17.223876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.223902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 00:33:46.575 [2024-07-13 15:45:17.224032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.224058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 00:33:46.575 [2024-07-13 15:45:17.224228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.224254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.575 qpair failed and we were unable to recover it. 
00:33:46.575 [2024-07-13 15:45:17.224461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.575 [2024-07-13 15:45:17.224487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.224696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.224746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.224951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.224977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.225145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.225172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.225372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.225398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.225566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.225592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.225729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.225756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.225963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.225990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.226137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.226163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.226349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.226375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 
00:33:46.576 [2024-07-13 15:45:17.226584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.226612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.226791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.226820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.227017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.227043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.227182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.227208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.227367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.227401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.227606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.227632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.227793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.227818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.228008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.228035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.228199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.228224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.228383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.228409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 
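Interleaved with those completions, the host keeps trying to re-open the TCP connection to the subsystem at 10.0.0.2:4420, and every attempt fails in posix_sock_create with errno = 111, i.e. ECONNREFUSED: nothing is listening on that port any more, so the remote kernel rejects the SYN outright. The stand-alone program below is an illustrative reproduction of that failure mode, not part of the test; the address and port are copied from the log, and on a machine where 10.0.0.2 is unreachable the connect() may time out instead of being refused.

/* Illustration: connecting to a TCP port with no listener yields
 * errno 111 (ECONNREFUSED), matching the posix_sock_create errors above. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	struct sockaddr_in sa = {
		.sin_family = AF_INET,
		.sin_port = htons(4420),          /* NVMe-oF TCP port from the log */
	};

	if (fd < 0) {
		return 1;
	}
	inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);   /* target address from the log */

	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
		/* With the target gone this prints: connect() failed, errno = 111 */
		printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
	}
	close(fd);
	return 0;
}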
00:33:46.576 [2024-07-13 15:45:17.228549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.228590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.228838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.228889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.229054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.229080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.229243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.229269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.229435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.229461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.229595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.229620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.229778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.229804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.229976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.230003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.230171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.230197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.230333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.230359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 
00:33:46.576 [2024-07-13 15:45:17.230556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.230581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.230732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.230758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.230909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.230937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.231174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.231200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.231414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.231445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.231619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.231644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.231811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.231838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.232010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.232037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.232181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.232207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.232377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.232404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 
00:33:46.576 [2024-07-13 15:45:17.232568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.232594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.232731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.232758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.233000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.233027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.233192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.233219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.233372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.233398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.233617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.233658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.233820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.233846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.233997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.234036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.234240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.234267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.234430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.234456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 
00:33:46.576 [2024-07-13 15:45:17.234621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.234647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.234805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.234832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.235001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.235027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.235167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.235193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.235354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.235379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.235533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.235559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.235712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.235738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.235906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.235933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.236092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.236117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.236282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.236308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 
00:33:46.576 [2024-07-13 15:45:17.236437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.236462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.236607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.236633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.576 [2024-07-13 15:45:17.236791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.576 [2024-07-13 15:45:17.236816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.576 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.236991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.237016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.237151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.237178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.237333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.237359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.237485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.237511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.237695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.237721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.237890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.237929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.238077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.238105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 
00:33:46.577 [2024-07-13 15:45:17.238316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.238342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.238533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.238559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.238687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.238715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.238877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.238904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.239071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.239103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.239265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.239292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.239458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.239484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.239646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.239672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.239819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.239859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.240028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.240054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 
00:33:46.577 [2024-07-13 15:45:17.240203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.240229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.240388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.240414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.240583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.240609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.240800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.240825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.240960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.240986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.241177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.241203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.241390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.241416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.241575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.241615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.241819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.241845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.241996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.242024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 
00:33:46.577 [2024-07-13 15:45:17.242168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.242193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.242378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.242403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.242564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.242589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.242747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.242773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.242909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.242935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.243102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.243128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.243312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.243337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.243470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.243495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.243683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.243708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.243847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.243878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 
00:33:46.577 [2024-07-13 15:45:17.244016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.244042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.244238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.244263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.244421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.244445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.244604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.244629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.244815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.244844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.245011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.245051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.245266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.245297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.245510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.245559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.245852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.245892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.246110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.246139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 
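Only two error codes recur through this whole stretch: -6 from the completion path and 111 from the connect path, retried across several different tqpair socket handles. As a quick sanity check of what the numbers mean on Linux, the snippet below simply prints the strerror() text for both; it is a reading aid for the log, nothing more.

/* Decode the two errno values that dominate this log: ENXIO (6, reported
 * as -6 by the completion poller) and ECONNREFUSED (111 from connect()). */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	printf("%3d -> %s\n", ENXIO, strerror(ENXIO));               /* No such device or address */
	printf("%3d -> %s\n", ECONNREFUSED, strerror(ECONNREFUSED)); /* Connection refused */
	return 0;
}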
00:33:46.577 [2024-07-13 15:45:17.246291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.246321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.246536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.246566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.246803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.246846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.247087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.247126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.247421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.247478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.577 [2024-07-13 15:45:17.247737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.577 [2024-07-13 15:45:17.247763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.577 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.247925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.247951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.248091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.248115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.248281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.248306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.248494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.248519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 
00:33:46.578 [2024-07-13 15:45:17.248656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.248681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.248836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.248862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.249030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.249055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.249216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.249241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.249383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.249408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.249594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.249619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.249777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.249802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.249982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.250008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.250179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.250204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.250362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.250387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 
00:33:46.578 [2024-07-13 15:45:17.250576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.250601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.250763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.250807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.250986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.251012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.251151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.251177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.251345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.251371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.251508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.251533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.251724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.251749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.251889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.251916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.252085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.252111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.252276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.252319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 
00:33:46.578 [2024-07-13 15:45:17.252502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.252528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.252702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.252742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.252881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.252910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.253070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.253095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.253288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.253315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.253583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.253632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.253843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.253876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.254063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.254089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.254268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.254297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.254512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.254538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 
00:33:46.578 [2024-07-13 15:45:17.254724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.254749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.254887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.254915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.255077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.255103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.255366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.255417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.255811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.255877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.256086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.256112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.256297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.256323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.256482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.256508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.256669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.256695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.256856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.256896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 
00:33:46.578 [2024-07-13 15:45:17.257022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.257048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.257211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.257236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.257427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.257453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.257615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.257641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.257825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.257854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.258044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.258071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.258236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.258262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.258444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.258470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.258635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.258663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.258850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.258886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 
00:33:46.578 [2024-07-13 15:45:17.259070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.259096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.259232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.259259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.259414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.259440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.259624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.259649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.259803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.259829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.259964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.259990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.260112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.260137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.260321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.260346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.578 qpair failed and we were unable to recover it. 00:33:46.578 [2024-07-13 15:45:17.260498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.578 [2024-07-13 15:45:17.260523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.260705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.260730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 
00:33:46.579 [2024-07-13 15:45:17.260902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.260927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.261085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.261110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.261231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.261256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.261496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.261522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.261710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.261735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.261888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.261914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.262073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.262099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.262258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.262284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.262469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.262494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.262621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.262647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 
00:33:46.579 [2024-07-13 15:45:17.262797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.262821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.262981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.263007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.263168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.263196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.263388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.263413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.263576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.263605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.263741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.263766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.263959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.263985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.264222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.264247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.264436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.264461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.264590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.264616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 
00:33:46.579 [2024-07-13 15:45:17.264788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.264813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.264970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.264996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.265158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.265183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.265338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.265363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.265601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.265626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.265813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.265838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.266030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.266055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.266218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.266243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.266380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.266405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.266530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.266556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 
00:33:46.579 [2024-07-13 15:45:17.266714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.266739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.266934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.266960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.267101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.267126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.267266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.267291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.267478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.267503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.267666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.267691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.267861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.267891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.268077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.268102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.268290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.268315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.268472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.268497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 
00:33:46.579 [2024-07-13 15:45:17.268663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.268688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.268888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.268915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.269078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.269103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.269263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.269288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.269423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.269449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.269634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.269659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.269794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.269819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.269978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.270003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.270136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.270161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.270321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.270346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 
00:33:46.579 [2024-07-13 15:45:17.270534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.270559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.270717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.270742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.270882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.270908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.271063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.271089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.271275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.271304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.271446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.271471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.271633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.271658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.271820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.271846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.272045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.272071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.272208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.272233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 
00:33:46.579 [2024-07-13 15:45:17.272392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.272417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.272602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.579 [2024-07-13 15:45:17.272627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.579 qpair failed and we were unable to recover it. 00:33:46.579 [2024-07-13 15:45:17.272790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.272815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.272977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.273004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.273163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.273188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.273347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.273373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.273535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.273560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.273726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.273752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.273917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.273943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.274107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.274133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 
00:33:46.580 [2024-07-13 15:45:17.274268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.274293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.274449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.274475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.274632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.274657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.274829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.274855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.274992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.275018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.275204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.275230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.275371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.275398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.275536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.275562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.275726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.275752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.275915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.275942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 
00:33:46.580 [2024-07-13 15:45:17.276105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.276131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.276321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.276346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.276529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.276554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.276713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.276738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.276905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.276932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.277062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.277088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.277287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.277313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.277471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.277496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.277688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.277713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 00:33:46.580 [2024-07-13 15:45:17.277876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.580 [2024-07-13 15:45:17.277902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.580 qpair failed and we were unable to recover it. 
00:33:46.580 [2024-07-13 15:45:17.278063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.580 [2024-07-13 15:45:17.278089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:46.580 qpair failed and we were unable to recover it.
00:33:46.580 [2024-07-13 15:45:17.278252 through 15:45:17.317297] the same three error lines (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeat for every failed reconnect attempt in this interval
00:33:46.583 [2024-07-13 15:45:17.317457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.583 [2024-07-13 15:45:17.317482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:46.583 qpair failed and we were unable to recover it.
00:33:46.583 [2024-07-13 15:45:17.317647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.583 [2024-07-13 15:45:17.317673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.583 qpair failed and we were unable to recover it. 00:33:46.583 [2024-07-13 15:45:17.317835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.583 [2024-07-13 15:45:17.317860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.583 qpair failed and we were unable to recover it. 00:33:46.583 [2024-07-13 15:45:17.317997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.583 [2024-07-13 15:45:17.318024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.583 qpair failed and we were unable to recover it. 00:33:46.583 [2024-07-13 15:45:17.318191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.583 [2024-07-13 15:45:17.318216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.583 qpair failed and we were unable to recover it. 00:33:46.583 [2024-07-13 15:45:17.318403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.318429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.318587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.318613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.318753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.318782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.318945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.318971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.319131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.319156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.319311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.319336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 
00:33:46.584 [2024-07-13 15:45:17.319461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.319486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.319621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.319646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.319807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.319832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.320029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.320054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.320212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.320238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.320422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.320447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.320633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.320658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.320797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.320822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.321020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.321046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.321210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.321235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 
00:33:46.584 [2024-07-13 15:45:17.321371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.321398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.321560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.321591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.321733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.321759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.321947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.321974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.322136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.322161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.322318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.322343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.322504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.322530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.322716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.322741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.322929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.322955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.323111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.323136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 
00:33:46.584 [2024-07-13 15:45:17.323326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.323351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.323510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.323535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.323723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.323748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.323906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.323931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.324053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.324078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.324241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.324267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.324435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.324461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.324593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.324619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.324806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.324833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.325018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.325044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 
00:33:46.584 [2024-07-13 15:45:17.325202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.325228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.325388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.325413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.325574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.325601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.325760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.325787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.325928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.325955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.326142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.326168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.326305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.326330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.326470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.326495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.326668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.326693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.326821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.326846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 
00:33:46.584 [2024-07-13 15:45:17.327039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.327064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.327194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.327220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.327403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.327428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.327587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.327612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.327765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.327791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.327935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.327960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.328116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.328141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.328301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.328327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.328489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.328514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.328652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.328679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 
00:33:46.584 [2024-07-13 15:45:17.328834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.328859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.329023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.329052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.329205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.329230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.329398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.329422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.329551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.329577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.329736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.329763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.329897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.329924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.330079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.584 [2024-07-13 15:45:17.330104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-13 15:45:17.330243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.585 [2024-07-13 15:45:17.330270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.585 qpair failed and we were unable to recover it. 00:33:46.585 [2024-07-13 15:45:17.330428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.585 [2024-07-13 15:45:17.330454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.585 qpair failed and we were unable to recover it. 
00:33:46.585 [2024-07-13 15:45:17.330613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.585 [2024-07-13 15:45:17.330639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.585 qpair failed and we were unable to recover it. 00:33:46.585 [2024-07-13 15:45:17.330803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.585 [2024-07-13 15:45:17.330828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.585 qpair failed and we were unable to recover it. 00:33:46.585 [2024-07-13 15:45:17.330959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.585 [2024-07-13 15:45:17.330985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.585 qpair failed and we were unable to recover it. 00:33:46.585 [2024-07-13 15:45:17.331167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.585 [2024-07-13 15:45:17.331192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.585 qpair failed and we were unable to recover it. 00:33:46.585 [2024-07-13 15:45:17.331349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.585 [2024-07-13 15:45:17.331374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.585 qpair failed and we were unable to recover it. 00:33:46.585 [2024-07-13 15:45:17.331539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.585 [2024-07-13 15:45:17.331565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.585 qpair failed and we were unable to recover it. 00:33:46.585 [2024-07-13 15:45:17.331726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.585 [2024-07-13 15:45:17.331751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.585 qpair failed and we were unable to recover it. 00:33:46.585 [2024-07-13 15:45:17.331888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.585 [2024-07-13 15:45:17.331913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.585 qpair failed and we were unable to recover it. 00:33:46.585 [2024-07-13 15:45:17.332098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.585 [2024-07-13 15:45:17.332123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.585 qpair failed and we were unable to recover it. 00:33:46.585 [2024-07-13 15:45:17.332282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.585 [2024-07-13 15:45:17.332307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.585 qpair failed and we were unable to recover it. 
00:33:46.585 [2024-07-13 15:45:17.332464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.585 [2024-07-13 15:45:17.332489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.585 qpair failed and we were unable to recover it. 00:33:46.585 [2024-07-13 15:45:17.332675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.585 [2024-07-13 15:45:17.332701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.585 qpair failed and we were unable to recover it. 00:33:46.585 [2024-07-13 15:45:17.332858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.585 [2024-07-13 15:45:17.332888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.585 qpair failed and we were unable to recover it. 00:33:46.585 [2024-07-13 15:45:17.333046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.585 [2024-07-13 15:45:17.333073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.585 qpair failed and we were unable to recover it. 00:33:46.585 [2024-07-13 15:45:17.333232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.585 [2024-07-13 15:45:17.333259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.585 qpair failed and we were unable to recover it. 00:33:46.585 [2024-07-13 15:45:17.333416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.585 [2024-07-13 15:45:17.333441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.585 qpair failed and we were unable to recover it. 00:33:46.585 [2024-07-13 15:45:17.333600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.585 [2024-07-13 15:45:17.333625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.585 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.333790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.333815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.333950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.333977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.334130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.334156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 
00:33:46.858 [2024-07-13 15:45:17.334316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.334342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.334483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.334508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.334644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.334670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.334837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.334863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.334996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.335021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.335210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.335235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.335376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.335401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.335582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.335608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.335793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.335818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.335982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.336008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 
00:33:46.858 [2024-07-13 15:45:17.336195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.336220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.336406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.336435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.336592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.336618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.336783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.336811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.337015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.337041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.337207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.337232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.337371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.337397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.337584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.337609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.337789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.337815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.337979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.338005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 
00:33:46.858 [2024-07-13 15:45:17.338160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.338185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.338320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.338345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.338499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.858 [2024-07-13 15:45:17.338525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.858 qpair failed and we were unable to recover it. 00:33:46.858 [2024-07-13 15:45:17.338681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.859 [2024-07-13 15:45:17.338706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.859 qpair failed and we were unable to recover it. 00:33:46.859 [2024-07-13 15:45:17.338892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.859 [2024-07-13 15:45:17.338917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.859 qpair failed and we were unable to recover it. 00:33:46.859 [2024-07-13 15:45:17.339076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.859 [2024-07-13 15:45:17.339101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.859 qpair failed and we were unable to recover it. 00:33:46.859 [2024-07-13 15:45:17.339263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.859 [2024-07-13 15:45:17.339288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.859 qpair failed and we were unable to recover it. 00:33:46.859 [2024-07-13 15:45:17.339444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.859 [2024-07-13 15:45:17.339469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.859 qpair failed and we were unable to recover it. 00:33:46.859 [2024-07-13 15:45:17.339628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.859 [2024-07-13 15:45:17.339653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.859 qpair failed and we were unable to recover it. 00:33:46.859 [2024-07-13 15:45:17.339779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.859 [2024-07-13 15:45:17.339804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.859 qpair failed and we were unable to recover it. 
00:33:46.859 [2024-07-13 15:45:17.339957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.859 [2024-07-13 15:45:17.339983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.859 qpair failed and we were unable to recover it. 00:33:46.859 [2024-07-13 15:45:17.340136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.859 [2024-07-13 15:45:17.340161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.859 qpair failed and we were unable to recover it. 00:33:46.859 [2024-07-13 15:45:17.340298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.859 [2024-07-13 15:45:17.340323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.859 qpair failed and we were unable to recover it. 00:33:46.859 [2024-07-13 15:45:17.340482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.859 [2024-07-13 15:45:17.340509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.859 qpair failed and we were unable to recover it. 00:33:46.859 [2024-07-13 15:45:17.340639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.859 [2024-07-13 15:45:17.340664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.859 qpair failed and we were unable to recover it. 00:33:46.859 [2024-07-13 15:45:17.340852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.859 [2024-07-13 15:45:17.340882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.859 qpair failed and we were unable to recover it. 00:33:46.859 [2024-07-13 15:45:17.341071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.859 [2024-07-13 15:45:17.341096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.859 qpair failed and we were unable to recover it. 00:33:46.859 [2024-07-13 15:45:17.341236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.859 [2024-07-13 15:45:17.341262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.859 qpair failed and we were unable to recover it. 00:33:46.859 [2024-07-13 15:45:17.341386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.859 [2024-07-13 15:45:17.341411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.859 qpair failed and we were unable to recover it. 00:33:46.859 [2024-07-13 15:45:17.341571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.859 [2024-07-13 15:45:17.341596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.859 qpair failed and we were unable to recover it. 
00:33:46.859 [2024-07-13 15:45:17.341758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:33:46.859 [2024-07-13 15:45:17.341783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 
00:33:46.859 qpair failed and we were unable to recover it. 
[... the same three-entry sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every connect attempt from 15:45:17.341938 through 15:45:17.380482 ...]
00:33:46.862 [2024-07-13 15:45:17.380694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:33:46.862 [2024-07-13 15:45:17.380719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 
00:33:46.862 qpair failed and we were unable to recover it. 
00:33:46.862 [2024-07-13 15:45:17.380858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.862 [2024-07-13 15:45:17.380888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.862 qpair failed and we were unable to recover it. 00:33:46.862 [2024-07-13 15:45:17.381075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.862 [2024-07-13 15:45:17.381104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.862 qpair failed and we were unable to recover it. 00:33:46.862 [2024-07-13 15:45:17.381271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.862 [2024-07-13 15:45:17.381296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.862 qpair failed and we were unable to recover it. 00:33:46.862 [2024-07-13 15:45:17.381451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.862 [2024-07-13 15:45:17.381476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.862 qpair failed and we were unable to recover it. 00:33:46.862 [2024-07-13 15:45:17.381667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.862 [2024-07-13 15:45:17.381692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.862 qpair failed and we were unable to recover it. 00:33:46.862 [2024-07-13 15:45:17.381893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.862 [2024-07-13 15:45:17.381919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.862 qpair failed and we were unable to recover it. 00:33:46.862 [2024-07-13 15:45:17.382079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.862 [2024-07-13 15:45:17.382105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.862 qpair failed and we were unable to recover it. 00:33:46.862 [2024-07-13 15:45:17.382259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.862 [2024-07-13 15:45:17.382284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.862 qpair failed and we were unable to recover it. 00:33:46.862 [2024-07-13 15:45:17.382415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.862 [2024-07-13 15:45:17.382440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.862 qpair failed and we were unable to recover it. 00:33:46.862 [2024-07-13 15:45:17.382598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.862 [2024-07-13 15:45:17.382623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.862 qpair failed and we were unable to recover it. 
00:33:46.862 [2024-07-13 15:45:17.382789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.862 [2024-07-13 15:45:17.382814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.862 qpair failed and we were unable to recover it. 00:33:46.862 [2024-07-13 15:45:17.382999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.862 [2024-07-13 15:45:17.383024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.862 qpair failed and we were unable to recover it. 00:33:46.862 [2024-07-13 15:45:17.383183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.862 [2024-07-13 15:45:17.383208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.862 qpair failed and we were unable to recover it. 00:33:46.862 [2024-07-13 15:45:17.383392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.862 [2024-07-13 15:45:17.383417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.862 qpair failed and we were unable to recover it. 00:33:46.862 [2024-07-13 15:45:17.383572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.862 [2024-07-13 15:45:17.383596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.862 qpair failed and we were unable to recover it. 00:33:46.862 [2024-07-13 15:45:17.383741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.862 [2024-07-13 15:45:17.383766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.862 qpair failed and we were unable to recover it. 00:33:46.862 [2024-07-13 15:45:17.383931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.862 [2024-07-13 15:45:17.383958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.862 qpair failed and we were unable to recover it. 00:33:46.862 [2024-07-13 15:45:17.384121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.862 [2024-07-13 15:45:17.384147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.862 qpair failed and we were unable to recover it. 00:33:46.862 [2024-07-13 15:45:17.384307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.862 [2024-07-13 15:45:17.384332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.384469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.384495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 
00:33:46.863 [2024-07-13 15:45:17.384677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.384702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.384834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.384859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.385053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.385078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.385237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.385262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.385448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.385474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.385658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.385683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.385884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.385911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.386066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.386092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.386256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.386283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.386444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.386469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 
00:33:46.863 [2024-07-13 15:45:17.386630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.386655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.386839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.386869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.387034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.387059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.387244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.387269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.387403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.387428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.387558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.387583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.387764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.387789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.387973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.388011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.388159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.388185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.388325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.388350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 
00:33:46.863 [2024-07-13 15:45:17.388544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.388569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.388722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.388747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.388922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.388949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.389105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.389130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.389258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.389283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.389467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.389491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.389684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.389709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.389876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.389901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.390042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.390066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.390224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.390249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 
00:33:46.863 [2024-07-13 15:45:17.390406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.390431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.390620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.390645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.390809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.390834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.391004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.391029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.391192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.391217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.391373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.391403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.391565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.391590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.391755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.391781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.391957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.391996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.392164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.392190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 
00:33:46.863 [2024-07-13 15:45:17.392354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.392379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.392541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.392567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.392757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.392783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.392973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.392999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.393126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.393151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.393291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.393317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.393477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.393503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.393692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.393720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.393886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.393911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.394056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.394081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 
00:33:46.863 [2024-07-13 15:45:17.394268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.394293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.394454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.394478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.394627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.394652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.394792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.394817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.394979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.395004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.395139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.395164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.395325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.395350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.395482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.395508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.395661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.395687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.395845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.395875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 
00:33:46.863 [2024-07-13 15:45:17.396037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.396062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.396227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.396252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.863 [2024-07-13 15:45:17.396438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.863 [2024-07-13 15:45:17.396467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.863 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.396628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.396653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.396813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.396838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.397000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.397027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.397188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.397213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.397375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.397400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.397557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.397582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.397735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.397760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 
00:33:46.864 [2024-07-13 15:45:17.397924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.397949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.398137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.398162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.398315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.398339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.398491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.398516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.398674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.398699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.398829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.398854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.399025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.399050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.399205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.399230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.399391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.399416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.399578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.399603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 
00:33:46.864 [2024-07-13 15:45:17.399767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.399792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.399920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.399946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.400109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.400134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.400297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.400322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.400509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.400534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.400719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.400747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.400928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.400954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.401090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.401115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.401278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.401303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.401460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.401490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 
00:33:46.864 [2024-07-13 15:45:17.401681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.401706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.401896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.401922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.402109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.402134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.402322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.402347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.402534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.402559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.402715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.402739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.402903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.402929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.403089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.403115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.403281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.403306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.403437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.403462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 
00:33:46.864 [2024-07-13 15:45:17.403632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.403657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.403793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.403819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.403958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.403983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.404145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.404171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.404360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.404386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.404582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.404607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.404748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.404774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.404934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.404961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.405116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.405142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 00:33:46.864 [2024-07-13 15:45:17.405297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.864 [2024-07-13 15:45:17.405322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.864 qpair failed and we were unable to recover it. 
00:33:46.864 [2024-07-13 15:45:17.405472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:46.864 [2024-07-13 15:45:17.405498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420
00:33:46.864 qpair failed and we were unable to recover it.
00:33:46.864 - 00:33:46.868 [the same triplet — posix.c:1038:posix_sock_create connect() failed with errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats approximately 210 times between 2024-07-13 15:45:17.405472 and 15:45:17.444445]
00:33:46.868 [2024-07-13 15:45:17.444609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.444635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.444818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.444844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.445016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.445042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.445204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.445230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.445386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.445411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.445565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.445590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.445726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.445751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.445906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.445932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.446094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.446119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.446253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.446278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 
00:33:46.868 [2024-07-13 15:45:17.446436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.446461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.446643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.446668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.446817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.446842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.447032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.447058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.447188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.447220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.447404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.447428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.447563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.447588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.447741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.447766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.447922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.447948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.448088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.448113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 
00:33:46.868 [2024-07-13 15:45:17.448242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.448267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.448399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.448424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.448562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.448589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.448754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.448779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.448914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.448941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.449068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.449093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.449248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.449273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.449436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.449461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.449630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.449656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.449811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.449838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 
00:33:46.868 [2024-07-13 15:45:17.450048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.450074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.450233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.450258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.450415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.450440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.450631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.450656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.450837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.450862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.451030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.451055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.451182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.451207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.451363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.451388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.451528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.451553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.451704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.451729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 
00:33:46.868 [2024-07-13 15:45:17.451870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.451897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.452039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.452064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.452229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.452254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.452420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.452446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.452611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.452636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.452821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.452846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.453022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.453048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.453206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.453231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.453396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.453421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 00:33:46.868 [2024-07-13 15:45:17.453553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.868 [2024-07-13 15:45:17.453578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.868 qpair failed and we were unable to recover it. 
00:33:46.869 [2024-07-13 15:45:17.453742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.453767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.453902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.453928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.454069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.454095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.454236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.454262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.454396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.454421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.454583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.454613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.454777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.454802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.454963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.454989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.455150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.455175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.455335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.455361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 
00:33:46.869 [2024-07-13 15:45:17.455518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.455543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.455696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.455722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.455896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.455923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.456091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.456117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.456280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.456305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.456442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.456467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.456631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.456656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.456791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.456816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.457003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.457030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.457216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.457241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 
00:33:46.869 [2024-07-13 15:45:17.457427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.457452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.457605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.457631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.457782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.457807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.457947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.457972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.458137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.458162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.458314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.458339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.458467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.458492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.458653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.458678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.458833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.458858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.458996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.459022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 
00:33:46.869 [2024-07-13 15:45:17.459189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.459214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.459400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.459425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.459554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.459582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.459756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.459781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.459945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.459970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.460157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.460182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.460312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.460337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.460500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.460525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.460644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.460669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.460826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.460851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 
00:33:46.869 [2024-07-13 15:45:17.460995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.461022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.461205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.461230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.461389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.461414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.461579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.461605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.461741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.461766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.461933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.461958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.462135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.462173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.462376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.462403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.462543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.462570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.462706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.462733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 
00:33:46.869 [2024-07-13 15:45:17.462906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.462933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.463088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.463113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.463273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.463298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.463457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.463482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.463643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.463668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.463826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.463851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.464018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.869 [2024-07-13 15:45:17.464043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.869 qpair failed and we were unable to recover it. 00:33:46.869 [2024-07-13 15:45:17.464179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.464205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.464372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.464397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.464563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.464594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 
00:33:46.870 [2024-07-13 15:45:17.464763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.464787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.464917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.464941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.465077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.465100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.465288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.465312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.465499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.465523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.465659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.465683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.465874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.465899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.466085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.466109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.466261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.466286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.466454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.466478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 
00:33:46.870 [2024-07-13 15:45:17.466639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.466664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.466828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.466854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.467018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.467043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.467183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.467207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.467364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.467388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.467553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.467577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.467734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.467758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.467921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.467947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.468078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.468103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.468241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.468267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 
00:33:46.870 [2024-07-13 15:45:17.468429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.468455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.468617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.468643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.468795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.468820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.468955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.468981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.469138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.469164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.469326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.469351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.469516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.469541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.469700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.469725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.469887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.469913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.470055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.470080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 
00:33:46.870 [2024-07-13 15:45:17.470211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.470236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.470377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.470402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.470586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.470611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.470743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.470768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.470908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.470934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.471097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.471123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.471297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.471322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.471506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.471531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.471714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.471739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.471881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.471911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 
00:33:46.870 [2024-07-13 15:45:17.472048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.472073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.472245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.472270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.472403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.472428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.472590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.472616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.472790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.472819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.472972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.473000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.473160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.473185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.473338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.473364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.473500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.473526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.473700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.473726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 
00:33:46.870 [2024-07-13 15:45:17.473908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.473934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.474107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.474131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.474284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.474310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.474498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.474524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.474665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.474690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.474853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.474884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.475055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.475079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.475240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.475265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.475432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.475457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 00:33:46.870 [2024-07-13 15:45:17.475619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.870 [2024-07-13 15:45:17.475646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.870 qpair failed and we were unable to recover it. 
00:33:46.871 [2024-07-13 15:45:17.475810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.475835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.475984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.476009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.476177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.476202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.476341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.476367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.476557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.476582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.476717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.476743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.476917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.476943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.477118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.477144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.477309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.477334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.477493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.477520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 
00:33:46.871 [2024-07-13 15:45:17.477654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.477679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.477843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.477874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.478039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.478065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.478194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.478218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.478381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.478406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.478590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.478615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.478757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.478782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.478914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.478940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.479097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.479121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.479270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.479300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 
00:33:46.871 [2024-07-13 15:45:17.479486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.479511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.479685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.479709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.479893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.479918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.480065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.480091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.480225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.480249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.480382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.480407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.480563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.480589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.480753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.480778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.480944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.480969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.481136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.481160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 
00:33:46.871 [2024-07-13 15:45:17.481298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.481322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.481506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.481531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.481657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.481682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.481824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.481849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.482015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.482040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.482196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.482220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.482359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.482386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.482570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.482596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.482722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.482747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.482908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.482934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 
00:33:46.871 [2024-07-13 15:45:17.483067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.483094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.483231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.483257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.483411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.483437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.483578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.483603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.483786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.483810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.483954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.483980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.484132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.484157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.484296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.484321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.484457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.484482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.484634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.484659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 
00:33:46.871 [2024-07-13 15:45:17.484829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.484854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.485044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.485069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.485222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.485248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.485396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.485421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.485610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.485635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.485758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.485784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.485943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.485969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.486136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.486162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.486316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.486341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.486501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.486531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 
00:33:46.871 [2024-07-13 15:45:17.486697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.486722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.486910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.486937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.487100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.871 [2024-07-13 15:45:17.487125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.871 qpair failed and we were unable to recover it. 00:33:46.871 [2024-07-13 15:45:17.487282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.487307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.487466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.487492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.487651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.487678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.487836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.487861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.488027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.488052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.488213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.488239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.488386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.488411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 
00:33:46.872 [2024-07-13 15:45:17.488575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.488601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.488739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.488764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.488930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.488956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.489096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.489123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.489255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.489280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.489447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.489472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.489628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.489653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.489815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.489840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.489980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.490006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.490170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.490195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 
00:33:46.872 [2024-07-13 15:45:17.490322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.490348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.490534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.490559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.490699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.490724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.490889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.490914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.491046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.491071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.491203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.491228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.491405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.491431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.491594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.491620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.491773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.491798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.491932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.491957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 
00:33:46.872 [2024-07-13 15:45:17.492147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.492172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.492337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.492362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.492552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.492577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.492732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.492757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.492947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.492973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.493163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.493188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.493371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.493396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.493552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.493577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.493718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.493745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.493890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.493920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 
00:33:46.872 [2024-07-13 15:45:17.494084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.494109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.494277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.494302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.494439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.494464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.494586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.494611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.494751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.494777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.494937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.494963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.495128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.495153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.495307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.495332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.495462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.495487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.495621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.495646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 
00:33:46.872 [2024-07-13 15:45:17.495782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.495807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.495969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.495995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.496149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.496175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.496337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.496363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.496495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.496521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.496678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.496704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.496863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.496893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.497026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.497051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.497190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.497215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 00:33:46.872 [2024-07-13 15:45:17.497387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.872 [2024-07-13 15:45:17.497412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.872 qpair failed and we were unable to recover it. 
00:33:46.873 [2024-07-13 15:45:17.497576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.873 [2024-07-13 15:45:17.497603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.873 qpair failed and we were unable to recover it. 00:33:46.873 [2024-07-13 15:45:17.497739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.873 [2024-07-13 15:45:17.497764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.873 qpair failed and we were unable to recover it. 00:33:46.873 [2024-07-13 15:45:17.497950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.873 [2024-07-13 15:45:17.497976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.873 qpair failed and we were unable to recover it. 00:33:46.873 [2024-07-13 15:45:17.498105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.873 [2024-07-13 15:45:17.498131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.873 qpair failed and we were unable to recover it. 00:33:46.873 [2024-07-13 15:45:17.498285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.873 [2024-07-13 15:45:17.498311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.873 qpair failed and we were unable to recover it. 00:33:46.873 [2024-07-13 15:45:17.498480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.873 [2024-07-13 15:45:17.498507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.873 qpair failed and we were unable to recover it. 00:33:46.873 [2024-07-13 15:45:17.498677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.873 [2024-07-13 15:45:17.498703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.873 qpair failed and we were unable to recover it. 00:33:46.873 [2024-07-13 15:45:17.498876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.873 [2024-07-13 15:45:17.498901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.873 qpair failed and we were unable to recover it. 00:33:46.873 [2024-07-13 15:45:17.499074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.873 [2024-07-13 15:45:17.499099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.873 qpair failed and we were unable to recover it. 00:33:46.873 [2024-07-13 15:45:17.499257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.873 [2024-07-13 15:45:17.499282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.873 qpair failed and we were unable to recover it. 
00:33:46.873 [2024-07-13 15:45:17.499442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.873 [2024-07-13 15:45:17.499467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.873 qpair failed and we were unable to recover it. 00:33:46.873 [2024-07-13 15:45:17.499605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.873 [2024-07-13 15:45:17.499630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.873 qpair failed and we were unable to recover it. 00:33:46.873 [2024-07-13 15:45:17.499782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.873 [2024-07-13 15:45:17.499809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.873 qpair failed and we were unable to recover it. 00:33:46.873 [2024-07-13 15:45:17.499988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.873 [2024-07-13 15:45:17.500014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.873 qpair failed and we were unable to recover it. 00:33:46.873 [2024-07-13 15:45:17.500152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.873 [2024-07-13 15:45:17.500178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.873 qpair failed and we were unable to recover it. 00:33:46.873 [2024-07-13 15:45:17.500351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.873 [2024-07-13 15:45:17.500376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.873 qpair failed and we were unable to recover it. 00:33:46.873 [2024-07-13 15:45:17.500537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.873 [2024-07-13 15:45:17.500562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.873 qpair failed and we were unable to recover it. 00:33:46.873 [2024-07-13 15:45:17.500694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.873 [2024-07-13 15:45:17.500719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.873 qpair failed and we were unable to recover it. 00:33:46.873 [2024-07-13 15:45:17.500882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.873 [2024-07-13 15:45:17.500908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.873 qpair failed and we were unable to recover it. 00:33:46.873 [2024-07-13 15:45:17.501072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.873 [2024-07-13 15:45:17.501097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.873 qpair failed and we were unable to recover it. 
00:33:46.876 [2024-07-13 15:45:17.538473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.538498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 00:33:46.876 [2024-07-13 15:45:17.538662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.538687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 00:33:46.876 [2024-07-13 15:45:17.538875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.538901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 00:33:46.876 [2024-07-13 15:45:17.539062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.539089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 00:33:46.876 [2024-07-13 15:45:17.539244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.539269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 00:33:46.876 [2024-07-13 15:45:17.539430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.539455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 00:33:46.876 [2024-07-13 15:45:17.539584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.539609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 00:33:46.876 [2024-07-13 15:45:17.539767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.539793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 00:33:46.876 [2024-07-13 15:45:17.539968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.539995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 00:33:46.876 [2024-07-13 15:45:17.540151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.540176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 
00:33:46.876 [2024-07-13 15:45:17.540348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.540374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 00:33:46.876 [2024-07-13 15:45:17.540528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.540553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 00:33:46.876 [2024-07-13 15:45:17.540745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.540770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 00:33:46.876 [2024-07-13 15:45:17.540917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.540944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 00:33:46.876 [2024-07-13 15:45:17.541072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.541098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 00:33:46.876 [2024-07-13 15:45:17.541259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.541284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 00:33:46.876 [2024-07-13 15:45:17.541441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.541466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 00:33:46.876 [2024-07-13 15:45:17.541626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.541651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 00:33:46.876 [2024-07-13 15:45:17.541815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.541841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 00:33:46.876 [2024-07-13 15:45:17.542008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.542033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 
00:33:46.876 [2024-07-13 15:45:17.542166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.542191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 00:33:46.876 [2024-07-13 15:45:17.542385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.542410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 00:33:46.876 [2024-07-13 15:45:17.542544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.542571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 00:33:46.876 [2024-07-13 15:45:17.542732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.542757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 00:33:46.876 [2024-07-13 15:45:17.542910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.542936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.876 qpair failed and we were unable to recover it. 00:33:46.876 [2024-07-13 15:45:17.543101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.876 [2024-07-13 15:45:17.543126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.543301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.543331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.543497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.543522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.543652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.543678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.543840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.543871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 
00:33:46.877 [2024-07-13 15:45:17.544030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.544056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.544221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.544246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.544430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.544455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.544617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.544643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.544774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.544799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.544966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.544992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.545177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.545202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.545338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.545365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.545525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.545552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.545679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.545705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 
00:33:46.877 [2024-07-13 15:45:17.545913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.545938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.546123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.546148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.546310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.546335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.546484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.546511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.546640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.546666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.546831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.546856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.547066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.547092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.547246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.547271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.547426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.547451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.547607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.547632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 
00:33:46.877 [2024-07-13 15:45:17.547795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.547820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.547998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.548024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.548160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.548185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.548319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.548345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.548506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.548533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.548665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.548690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.548851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.548884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.549042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.549067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.549229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.549255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.549447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.549472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 
00:33:46.877 [2024-07-13 15:45:17.549631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.549656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.549814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.549839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.550033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.550058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.550195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.550220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.550357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.550382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.550535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.550561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.550720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.550752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.550951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.550978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.551116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.551142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.551302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.551327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 
00:33:46.877 [2024-07-13 15:45:17.551484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.551509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.551679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.551704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.551925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.551951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.552139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.552164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.552351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.552376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.552538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.552563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.552727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.552752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.552885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.552912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.553073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.553099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.553262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.553288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 
00:33:46.877 [2024-07-13 15:45:17.553458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.553484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.553675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.553700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.553823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.553849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.553980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.877 [2024-07-13 15:45:17.554004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.877 qpair failed and we were unable to recover it. 00:33:46.877 [2024-07-13 15:45:17.554169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.554194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.554352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.554377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.554539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.554564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.554722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.554747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.554873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.554899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.555083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.555108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 
00:33:46.878 [2024-07-13 15:45:17.555300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.555325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.555482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.555511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.555715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.555740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.555911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.555937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.556096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.556124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.556285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.556310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.556468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.556493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.556630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.556655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.556812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.556838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.556999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.557025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 
00:33:46.878 [2024-07-13 15:45:17.557193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.557218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.557414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.557439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.557625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.557651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.557805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.557830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.557975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.558001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.558163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.558188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.558352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.558382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.558545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.558571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.558711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.558736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.558928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.558954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 
00:33:46.878 [2024-07-13 15:45:17.559116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.559142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.559310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.559336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.559485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.559510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.559645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.559670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.559843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.559875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.560035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.560060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.560191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.560216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.560378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.560404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.560595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.560621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.560758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.560784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 
00:33:46.878 [2024-07-13 15:45:17.560923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.560949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.561108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.561134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.561266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.561292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.561442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.561467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.561598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.561623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.561782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.561807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.561973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.561999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.562135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.562160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.562318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.562343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.562529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.562554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 
00:33:46.878 [2024-07-13 15:45:17.562715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.562741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.562924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.562951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.563085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.563111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.563287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.563314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.563446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.563471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.563667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.563693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.563888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.563914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.564102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.564127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.564298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.564323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.564479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.564504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 
00:33:46.878 [2024-07-13 15:45:17.564643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.564669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.564856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.878 [2024-07-13 15:45:17.564886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.878 qpair failed and we were unable to recover it. 00:33:46.878 [2024-07-13 15:45:17.565060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.565085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.565210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.565235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.565401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.565426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.565549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.565573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.565725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.565759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.565932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.565959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.566095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.566121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.566257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.566283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 
00:33:46.879 [2024-07-13 15:45:17.566470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.566496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.566640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.566665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.566793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.566818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.566986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.567013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.567180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.567206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.567344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.567370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.567540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.567565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.567727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.567752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.567937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.567962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.568122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.568147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 
00:33:46.879 [2024-07-13 15:45:17.568316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.568341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.568524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.568550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.568709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.568734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.568900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.568926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.569101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.569126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.569286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.569311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.569496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.569522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.569664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.569691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.569876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.569902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.570063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.570089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 
00:33:46.879 [2024-07-13 15:45:17.570270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.570296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.570452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.570478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.570634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.570659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.570802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.570828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.571006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.571032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.571197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.571222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.571409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.571435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.571566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.571591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.571750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.571775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.571901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.571927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 
00:33:46.879 [2024-07-13 15:45:17.572090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.572115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.572252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.572277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.572404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.572429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.572566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.572591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.572755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.572780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.572935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.572961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.573089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.573118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.573332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.573360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.573531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.573557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.573688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.573713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 
00:33:46.879 [2024-07-13 15:45:17.573906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.573935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.574108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.574136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.574322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.574347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.574476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.574501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.574716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.574743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.574919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.574948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.575125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.575150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.575340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.575366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.575552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.575577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.575732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.575757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 
00:33:46.879 [2024-07-13 15:45:17.575944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.575970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.576156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.576181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.576348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.576374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.576529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.576554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.879 qpair failed and we were unable to recover it. 00:33:46.879 [2024-07-13 15:45:17.576711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.879 [2024-07-13 15:45:17.576736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.576878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.576903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.577048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.577074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.577261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.577287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.577476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.577501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.577656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.577681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 
00:33:46.880 [2024-07-13 15:45:17.577850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.577880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.578025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.578050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.578230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.578255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.578467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.578492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.578623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.578666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.578850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.578886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.579082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.579107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.579243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.579268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.579440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.579465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.579629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.579654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 
00:33:46.880 [2024-07-13 15:45:17.579832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.579860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.580050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.580075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.580241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.580266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.580397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.580423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.580611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.580636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.580804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.580829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.580997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.581027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.581160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.581186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.581322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.581349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.581535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.581560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 
00:33:46.880 [2024-07-13 15:45:17.581742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.581768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.581932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.581957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.582123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.582149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.582309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.582334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.582492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.582519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.582712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.582737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.582903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.582929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.583111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.583136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.583323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.583348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.583488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.583513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 
00:33:46.880 [2024-07-13 15:45:17.583711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.583737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.583898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.583923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.584111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.584136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.584275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.584310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.584475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.584500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.584633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.584658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.584821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.584846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.585010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.585038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.585192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.585217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.585371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.585396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 
00:33:46.880 [2024-07-13 15:45:17.585555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.585580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.585768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.585794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.585936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.585961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.586096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.586127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.586289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.586314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.586478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.586503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.586628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.586653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.586814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.586839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.586994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.587020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.880 [2024-07-13 15:45:17.587184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.587209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 
00:33:46.880 [2024-07-13 15:45:17.587370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.880 [2024-07-13 15:45:17.587395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.880 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.587526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.587551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.587702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.587727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.587916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.587942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.588098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.588123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.588322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.588348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.588503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.588534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.588663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.588688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.588871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.588914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.589072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.589098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 
00:33:46.881 [2024-07-13 15:45:17.589276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.589304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.589507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.589535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.589701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.589725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.589925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.589954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.590136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.590162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.590313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.590338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.590474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.590500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.590653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.590678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.590863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.590896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.591070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.591099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 
00:33:46.881 [2024-07-13 15:45:17.591300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.591325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.591500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.591530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.591753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.591779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.591940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.591966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.592153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.592178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.592391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.592419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.592605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.592631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.592808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.592836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.593026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.593053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.593237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.593266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 
00:33:46.881 [2024-07-13 15:45:17.593444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.593472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.593656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.593683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.593847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.593877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.594043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.594069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.594280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.594308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.594484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.594512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.594691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.594716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.594890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.594915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.595120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.595148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.595350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.595378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 
00:33:46.881 [2024-07-13 15:45:17.595541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.595566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.595726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.595754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.595950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.595977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.596121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.596147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.596311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.596336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.596468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.596493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.596696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.596723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.596904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.596933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.597089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.597114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.597292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.597320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 
00:33:46.881 [2024-07-13 15:45:17.597500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.597529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.597744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.597772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.597954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.597979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.598199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.598227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.598375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.598403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.598582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.598610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.598798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.598823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.599033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.599062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.599267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.599295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.599476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.599504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 
00:33:46.881 [2024-07-13 15:45:17.599686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.599712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.599910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.599939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.600111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.600140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.881 qpair failed and we were unable to recover it. 00:33:46.881 [2024-07-13 15:45:17.600315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.881 [2024-07-13 15:45:17.600343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.600537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.600562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.600755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.600782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.600986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.601015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.601180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.601205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.601371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.601396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.601582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.601610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 
00:33:46.882 [2024-07-13 15:45:17.601786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.601814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.601987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.602016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.602201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.602226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.602373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.602405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.602613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.602638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.602797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.602823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.603036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.603062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.603276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.603304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.603482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.603510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.603683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.603711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 
00:33:46.882 [2024-07-13 15:45:17.603887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.603930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.604094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.604119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.604311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.604339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.604541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.604568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.604725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.604750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.604901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.604926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.605059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.605084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.605261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.605302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.605488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.605513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.605673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.605698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 
00:33:46.882 [2024-07-13 15:45:17.605902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.605931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.606109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.606137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.606322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.606347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.606531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.606559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.606736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.606760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.606942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.606968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.607155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.607180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.607335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.607363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:46.882 [2024-07-13 15:45:17.607565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.882 [2024-07-13 15:45:17.607593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:46.882 qpair failed and we were unable to recover it. 00:33:47.162 [2024-07-13 15:45:17.607769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.162 [2024-07-13 15:45:17.607797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.162 qpair failed and we were unable to recover it. 
00:33:47.162 [2024-07-13 15:45:17.607962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.162 [2024-07-13 15:45:17.607988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.162 qpair failed and we were unable to recover it. 00:33:47.162 [2024-07-13 15:45:17.608145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.162 [2024-07-13 15:45:17.608193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.162 qpair failed and we were unable to recover it. 00:33:47.162 [2024-07-13 15:45:17.608338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.162 [2024-07-13 15:45:17.608367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.162 qpair failed and we were unable to recover it. 00:33:47.162 [2024-07-13 15:45:17.608521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.162 [2024-07-13 15:45:17.608550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.162 qpair failed and we were unable to recover it. 00:33:47.162 [2024-07-13 15:45:17.608759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.162 [2024-07-13 15:45:17.608784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.162 qpair failed and we were unable to recover it. 00:33:47.162 [2024-07-13 15:45:17.608925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.162 [2024-07-13 15:45:17.608964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.162 qpair failed and we were unable to recover it. 00:33:47.162 [2024-07-13 15:45:17.609160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.162 [2024-07-13 15:45:17.609188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.162 qpair failed and we were unable to recover it. 00:33:47.162 [2024-07-13 15:45:17.609393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.162 [2024-07-13 15:45:17.609419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.162 qpair failed and we were unable to recover it. 00:33:47.162 [2024-07-13 15:45:17.609601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.162 [2024-07-13 15:45:17.609627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.162 qpair failed and we were unable to recover it. 00:33:47.162 [2024-07-13 15:45:17.609801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.162 [2024-07-13 15:45:17.609828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.162 qpair failed and we were unable to recover it. 
00:33:47.162 [2024-07-13 15:45:17.610009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.162 [2024-07-13 15:45:17.610037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.162 qpair failed and we were unable to recover it. 00:33:47.162 [2024-07-13 15:45:17.610212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.162 [2024-07-13 15:45:17.610241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.162 qpair failed and we were unable to recover it. 00:33:47.162 [2024-07-13 15:45:17.610451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.162 [2024-07-13 15:45:17.610476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.162 qpair failed and we were unable to recover it. 00:33:47.162 [2024-07-13 15:45:17.610654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.162 [2024-07-13 15:45:17.610686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.162 qpair failed and we were unable to recover it. 00:33:47.162 [2024-07-13 15:45:17.610902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.162 [2024-07-13 15:45:17.610931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.162 qpair failed and we were unable to recover it. 00:33:47.162 [2024-07-13 15:45:17.611132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.162 [2024-07-13 15:45:17.611160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.162 qpair failed and we were unable to recover it. 00:33:47.162 [2024-07-13 15:45:17.611303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.162 [2024-07-13 15:45:17.611328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.162 qpair failed and we were unable to recover it. 00:33:47.162 [2024-07-13 15:45:17.611508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.162 [2024-07-13 15:45:17.611536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.162 qpair failed and we were unable to recover it. 00:33:47.162 [2024-07-13 15:45:17.611742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.611770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.611972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.612001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 
00:33:47.163 [2024-07-13 15:45:17.612210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.612235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.612382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.612410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.612623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.612648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.612806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.612831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.612957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.612982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.613155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.613183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.613397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.613422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.613605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.613633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.613802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.613830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.614045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.614071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 
00:33:47.163 [2024-07-13 15:45:17.614237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.614262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.614468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.614496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.614656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.614681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.614858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.614904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.615082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.615112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.615311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.615339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.615529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.615554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.615764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.615793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.615996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.616025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.616190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.616218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 
00:33:47.163 [2024-07-13 15:45:17.616405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.616432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.616596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.616621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.616832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.616860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.617051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.617076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.617239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.617267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.617415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.617443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.617612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.617640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.617847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.617877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.618011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.618038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.618201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.618227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 
00:33:47.163 [2024-07-13 15:45:17.618401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.618429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.618595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.618623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.618827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.618852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.619037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.619070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.619250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.619278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.619458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.619485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.619679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.619704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.619885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.163 [2024-07-13 15:45:17.619914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.163 qpair failed and we were unable to recover it. 00:33:47.163 [2024-07-13 15:45:17.620113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.620141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.620309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.620337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 
00:33:47.164 [2024-07-13 15:45:17.620489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.620515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.620652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.620679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.620843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.620891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.621069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.621097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.621273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.621298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.621472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.621499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.621645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.621672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.621850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.621882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.622045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.622071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.622257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.622281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 
00:33:47.164 [2024-07-13 15:45:17.622494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.622522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.622694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.622722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.622870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.622896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.623032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.623057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.623259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.623287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.623489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.623514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.623669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.623694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.623892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.623921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.624072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.624097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.624262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.624287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 
00:33:47.164 [2024-07-13 15:45:17.624478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.624503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.624683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.624711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.624917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.624945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.625120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.625147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.625328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.625354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.625529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.625556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.625758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.625786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.625997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.626025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.626186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.626211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.626385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.626413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 
00:33:47.164 [2024-07-13 15:45:17.626586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.626614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.626789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.626816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.627002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.627028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.627199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.627232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.627409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.627434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.627593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.627619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.627780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.627804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.627938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.627973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.164 qpair failed and we were unable to recover it. 00:33:47.164 [2024-07-13 15:45:17.628113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.164 [2024-07-13 15:45:17.628139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.165 qpair failed and we were unable to recover it. 00:33:47.165 [2024-07-13 15:45:17.628338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.165 [2024-07-13 15:45:17.628364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.165 qpair failed and we were unable to recover it. 
00:33:47.165 [2024-07-13 15:45:17.628505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.165 [2024-07-13 15:45:17.628530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.165 qpair failed and we were unable to recover it. 00:33:47.165 [2024-07-13 15:45:17.628732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.165 [2024-07-13 15:45:17.628760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.165 qpair failed and we were unable to recover it. 00:33:47.165 [2024-07-13 15:45:17.628972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.165 [2024-07-13 15:45:17.628998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.165 qpair failed and we were unable to recover it. 00:33:47.165 [2024-07-13 15:45:17.629173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.165 [2024-07-13 15:45:17.629201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.165 qpair failed and we were unable to recover it. 00:33:47.165 [2024-07-13 15:45:17.629414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.165 [2024-07-13 15:45:17.629439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.165 qpair failed and we were unable to recover it. 00:33:47.165 [2024-07-13 15:45:17.629612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.165 [2024-07-13 15:45:17.629637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.165 qpair failed and we were unable to recover it. 00:33:47.165 [2024-07-13 15:45:17.629812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.165 [2024-07-13 15:45:17.629840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.165 qpair failed and we were unable to recover it. 00:33:47.165 [2024-07-13 15:45:17.630006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.165 [2024-07-13 15:45:17.630035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.165 qpair failed and we were unable to recover it. 00:33:47.165 [2024-07-13 15:45:17.630226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.165 [2024-07-13 15:45:17.630252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.165 qpair failed and we were unable to recover it. 00:33:47.165 [2024-07-13 15:45:17.630436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.165 [2024-07-13 15:45:17.630464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.165 qpair failed and we were unable to recover it. 
00:33:47.165 [2024-07-13 15:45:17.630640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.165 [2024-07-13 15:45:17.630667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:47.165 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / qpair failure entries for tqpair=0x7f7020000b90 (addr=10.0.0.2, port=4420) repeat through 2024-07-13 15:45:17.673 ...]
00:33:47.170 [2024-07-13 15:45:17.673674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.170 [2024-07-13 15:45:17.673701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:47.170 qpair failed and we were unable to recover it.
00:33:47.170 [2024-07-13 15:45:17.673878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.170 [2024-07-13 15:45:17.673921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.170 qpair failed and we were unable to recover it. 00:33:47.170 [2024-07-13 15:45:17.674081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.170 [2024-07-13 15:45:17.674106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.170 qpair failed and we were unable to recover it. 00:33:47.170 [2024-07-13 15:45:17.674320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.170 [2024-07-13 15:45:17.674347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.170 qpair failed and we were unable to recover it. 00:33:47.170 [2024-07-13 15:45:17.674524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.170 [2024-07-13 15:45:17.674553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.170 qpair failed and we were unable to recover it. 00:33:47.170 [2024-07-13 15:45:17.674732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.170 [2024-07-13 15:45:17.674758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.170 qpair failed and we were unable to recover it. 00:33:47.170 [2024-07-13 15:45:17.674911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.170 [2024-07-13 15:45:17.674940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.170 qpair failed and we were unable to recover it. 00:33:47.170 [2024-07-13 15:45:17.675085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.170 [2024-07-13 15:45:17.675115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.170 qpair failed and we were unable to recover it. 00:33:47.170 [2024-07-13 15:45:17.675319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.170 [2024-07-13 15:45:17.675347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.170 qpair failed and we were unable to recover it. 00:33:47.170 [2024-07-13 15:45:17.675519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.170 [2024-07-13 15:45:17.675544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.170 qpair failed and we were unable to recover it. 00:33:47.170 [2024-07-13 15:45:17.675678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.170 [2024-07-13 15:45:17.675703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.170 qpair failed and we were unable to recover it. 
00:33:47.170 [2024-07-13 15:45:17.675906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.170 [2024-07-13 15:45:17.675935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.170 qpair failed and we were unable to recover it. 00:33:47.170 [2024-07-13 15:45:17.676104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.170 [2024-07-13 15:45:17.676132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.170 qpair failed and we were unable to recover it. 00:33:47.170 [2024-07-13 15:45:17.676281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.170 [2024-07-13 15:45:17.676306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.170 qpair failed and we were unable to recover it. 00:33:47.170 [2024-07-13 15:45:17.676472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.170 [2024-07-13 15:45:17.676501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.170 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.676665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.676690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.676875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.676900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.677063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.677088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.677263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.677291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.677461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.677489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.677667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.677695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 
00:33:47.171 [2024-07-13 15:45:17.677884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.677909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.678053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.678078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.678237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.678267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.678413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.678441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.678650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.678674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.678841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.678874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.679078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.679106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.679309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.679337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.679522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.679547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.679727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.679757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 
00:33:47.171 [2024-07-13 15:45:17.679936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.679965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.680146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.680173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.680352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.680378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.680560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.680588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.680733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.680761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.680941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.680969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.681129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.681154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.681314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.681339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.681524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.681550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.681717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.681744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 
00:33:47.171 [2024-07-13 15:45:17.681901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.681928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.682137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.682165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.682372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.682396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.682522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.682548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.682732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.682757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.682958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.682987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.683167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.171 [2024-07-13 15:45:17.683194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.171 qpair failed and we were unable to recover it. 00:33:47.171 [2024-07-13 15:45:17.683406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.683431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.683562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.683588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.683741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.683768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 
00:33:47.172 [2024-07-13 15:45:17.683954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.683980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.684166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.684191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.684428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.684453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.684635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.684667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.684843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.684877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.685060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.685088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.685271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.685296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.685476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.685504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.685705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.685734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.685875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.685903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 
00:33:47.172 [2024-07-13 15:45:17.686111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.686136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.686353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.686381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.686525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.686553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.686729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.686759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.686949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.686975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.687156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.687183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.687361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.687389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.687559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.687587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.687789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.687814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.687994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.688022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 
00:33:47.172 [2024-07-13 15:45:17.688225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.688253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.688464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.688489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.688652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.688678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.688841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.688890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.689068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.689096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.689280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.689308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.689498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.689523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.689702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.689730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.689909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.689939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.690082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.690110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 
00:33:47.172 [2024-07-13 15:45:17.690272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.690298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.690460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.690486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.690663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.690691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.690843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.690887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.691062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.691087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.691295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.691323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.691534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.691562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.172 [2024-07-13 15:45:17.691739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.172 [2024-07-13 15:45:17.691767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.172 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.691943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.691969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.692138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.692163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 
00:33:47.173 [2024-07-13 15:45:17.692373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.692400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.692609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.692637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.692845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.692875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.693009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.693038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.693240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.693268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.693440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.693467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.693615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.693641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.693848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.693881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.694089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.694114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.694319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.694347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 
00:33:47.173 [2024-07-13 15:45:17.694532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.694557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.694734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.694762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.694963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.694992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.695192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.695221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.695406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.695431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.695616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.695641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.695818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.695846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.696024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.696052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.696236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.696261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.696423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.696448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 
00:33:47.173 [2024-07-13 15:45:17.696650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.696678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.696855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.696890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.697101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.697126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.697307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.697334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.697540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.697568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.697716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.697744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.697917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.697942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.698126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.698155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.698294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.698322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.698502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.698530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 
00:33:47.173 [2024-07-13 15:45:17.698689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.698714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.698920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.698948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.699154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.699182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.699384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.699409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.699577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.699603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.699789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.699817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.700027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.700056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.700240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.173 [2024-07-13 15:45:17.700265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.173 qpair failed and we were unable to recover it. 00:33:47.173 [2024-07-13 15:45:17.700450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.700476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.700648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.700676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 
00:33:47.174 [2024-07-13 15:45:17.700851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.700884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.701091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.701118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.701301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.701327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.701496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.701525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.701730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.701758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.701935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.701963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.702132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.702157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.702365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.702393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.702600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.702628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.702830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.702858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 
00:33:47.174 [2024-07-13 15:45:17.703052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.703077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.703251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.703279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.703456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.703484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.703694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.703719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.703941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.703967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.704155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.704180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.704313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.704339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.704504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.704547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.704756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.704781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.704964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.704992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 
00:33:47.174 [2024-07-13 15:45:17.705201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.705229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.705403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.705431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.705615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.705641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.705797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.705823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.706004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.706032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.706230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.706258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.706408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.706434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.706611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.706639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.706809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.706837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.707008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.707036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 
00:33:47.174 [2024-07-13 15:45:17.707219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.707249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.707416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.707441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.707579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.707604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.707788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.707816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.707962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.707988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.708149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.708193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.708378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.708404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.708538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.708579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.174 qpair failed and we were unable to recover it. 00:33:47.174 [2024-07-13 15:45:17.708773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.174 [2024-07-13 15:45:17.708799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.708983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.709011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 
00:33:47.175 [2024-07-13 15:45:17.709194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.709219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.709351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.709394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.709612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.709638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.709798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.709824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.710015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.710044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.710221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.710249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.710398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.710423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.710579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.710623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.710823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.710851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.711016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.711044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 
00:33:47.175 [2024-07-13 15:45:17.711260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.711285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.711472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.711500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.711642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.711672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.711849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.711887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.712055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.712080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.712267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.712292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.712419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.712445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.712576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.712601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.712783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.712811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.712963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.712989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 
00:33:47.175 [2024-07-13 15:45:17.713147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.713171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.713390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.713418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.713599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.713624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.713761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.713785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.713992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.714021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.714205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.714232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.714392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.714419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.714577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.714602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.714765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.714790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.714990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.715019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 
00:33:47.175 [2024-07-13 15:45:17.715170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.715199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.715359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.715384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.715594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.715619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.715773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.175 [2024-07-13 15:45:17.715798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.175 qpair failed and we were unable to recover it. 00:33:47.175 [2024-07-13 15:45:17.715959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.715986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.716143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.716168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.716303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.716329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.716515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.716541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.716759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.716786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.716970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.716997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 
00:33:47.176 [2024-07-13 15:45:17.717147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.717172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.717376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.717404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.717609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.717635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.717779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.717807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.717948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.717977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.718157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.718186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.718370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.718396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.718606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.718633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.718818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.718846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.719008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.719035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 
00:33:47.176 [2024-07-13 15:45:17.719211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.719236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.719369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.719394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.719530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.719556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.719746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.719772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.719957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.719982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.720117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.720142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.720286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.720311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.720456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.720482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.720617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.720643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.720800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.720842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 
00:33:47.176 [2024-07-13 15:45:17.721020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.721046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.721175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.721200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.721330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.721356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.721488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.721514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.721646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.721674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.721808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.721834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.722027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.722053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.722219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.722244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.722377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.722404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.722559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.722584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 
00:33:47.176 [2024-07-13 15:45:17.722750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.722799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.722977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.723010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.723196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.723221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.723380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.176 [2024-07-13 15:45:17.723405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.176 qpair failed and we were unable to recover it. 00:33:47.176 [2024-07-13 15:45:17.723557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.723582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.723745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.723770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.723938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.723965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.724127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.724154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.724348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.724374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.724539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.724566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 
00:33:47.177 [2024-07-13 15:45:17.724724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.724749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.724882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.724908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.725062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.725087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.725248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.725274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.725437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.725463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.725649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.725674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.725808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.725833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.725968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.725993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.726163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.726188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.726350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.726375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 
00:33:47.177 [2024-07-13 15:45:17.726516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.726541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.726699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.726724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.726887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.726913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.727109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.727137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.727336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.727362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.727516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.727541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.727726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.727752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.727893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.727919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.728060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.728086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.728226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.728252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 
00:33:47.177 [2024-07-13 15:45:17.728396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.728421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.728555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.728580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.728707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.728732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.728922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.728948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.729109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.729135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.729321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.729346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.729503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.729529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.729686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.729711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.729874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.729900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.730062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.730087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 
00:33:47.177 [2024-07-13 15:45:17.730220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.730249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.730388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.730413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.730537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.730562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.730719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.730745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.177 qpair failed and we were unable to recover it. 00:33:47.177 [2024-07-13 15:45:17.730898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.177 [2024-07-13 15:45:17.730939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.731130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.731156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.731317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.731342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.731499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.731524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.731690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.731715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.731880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.731914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 
00:33:47.178 [2024-07-13 15:45:17.732075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.732100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.732240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.732265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.732451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.732476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.732637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.732664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.732861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.732896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.733035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.733061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.733197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.733223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.733378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.733403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.733560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.733586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.733743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.733768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 
00:33:47.178 [2024-07-13 15:45:17.733906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.733932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.734107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.734132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.734267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.734292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.734449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.734474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.734640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.734666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.734831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.734857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.735035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.735061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.735219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.735245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.735429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.735454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.735615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.735640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 
00:33:47.178 [2024-07-13 15:45:17.735806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.735831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.735970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.735996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.736158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.736183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.736336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.736361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.736548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.736573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.736745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.736770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.736934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.736959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.737119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.737144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.737316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.737341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.737513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.737538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 
00:33:47.178 [2024-07-13 15:45:17.737701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.737729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.737860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.737890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.738054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.738082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.738270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.738298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.178 [2024-07-13 15:45:17.738447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.178 [2024-07-13 15:45:17.738471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.178 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.738660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.738685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.738844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.738881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.739041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.739067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.739224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.739250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.739381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.739406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 
00:33:47.179 [2024-07-13 15:45:17.739599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.739627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.739769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.739797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.739979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.740005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.740168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.740194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.740359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.740384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.740555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.740580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.740746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.740771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.740960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.740988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.741146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.741172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.741331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.741356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 
00:33:47.179 [2024-07-13 15:45:17.741488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.741514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.741655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.741680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.741842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.741871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.742046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.742072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.742265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.742290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.742487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.742513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.742723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.742750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.742929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.742958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.743138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.743163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.743308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.743333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 
00:33:47.179 [2024-07-13 15:45:17.743489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.743514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.743699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.743724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.743878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.743903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.744106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.744131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.744314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.744339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.744523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.744548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.744705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.744733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.744918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.744945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.745145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.745171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 00:33:47.179 [2024-07-13 15:45:17.745339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.179 [2024-07-13 15:45:17.745364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.179 qpair failed and we were unable to recover it. 
00:33:47.180 [2024-07-13 15:45:17.745522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.745551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.745681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.745706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.745862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.745894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.746058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.746084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.746250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.746274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.746425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.746450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.746640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.746665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.746824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.746850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.747048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.747074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.747212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.747237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 
00:33:47.180 [2024-07-13 15:45:17.747389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.747414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.747565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.747591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.747760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.747785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.747942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.747969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.748155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.748183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.748357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.748385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.748541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.748566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.748696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.748721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.748909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.748935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.749119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.749144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 
00:33:47.180 [2024-07-13 15:45:17.749310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.749335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.749498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.749523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.749648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.749673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.749835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.749860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.750028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.750054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.750182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.750207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.750335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.750361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.750551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.750577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.750740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.750765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.750930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.750956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 
00:33:47.180 [2024-07-13 15:45:17.751119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.751145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.751305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.751331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.751487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.751512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.751681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.751706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.751841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.751872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.752063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.752092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.752251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.752276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.752406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.752433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.752588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.752613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 00:33:47.180 [2024-07-13 15:45:17.752775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.180 [2024-07-13 15:45:17.752800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.180 qpair failed and we were unable to recover it. 
00:33:47.180 [2024-07-13 15:45:17.752934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.752964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.753125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.753150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.753278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.753303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.753465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.753490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.753675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.753699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.753869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.753894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.754031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.754057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.754219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.754244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.754372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.754397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.754523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.754548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 
00:33:47.181 [2024-07-13 15:45:17.754708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.754734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.754901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.754927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.755053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.755078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.755274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.755299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.755457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.755483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.755608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.755634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.755776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.755802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.755968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.755995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.756202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.756230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.756389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.756416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 
00:33:47.181 [2024-07-13 15:45:17.756575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.756600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.756731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.756756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.756927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.756953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.757086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.757112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.757280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.757305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.757468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.757493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.757623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.757647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.757810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.757835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.758026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.758051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.758210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.758236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 
00:33:47.181 [2024-07-13 15:45:17.758420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.758445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.758576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.758603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.758767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.758792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.758955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.758981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.759142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.759167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.759333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.759358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.759516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.759540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.759702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.759729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.759894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.759938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 00:33:47.181 [2024-07-13 15:45:17.760094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.181 [2024-07-13 15:45:17.760121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.181 qpair failed and we were unable to recover it. 
00:33:47.181 [2024-07-13 15:45:17.760277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.760306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.760465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.760491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.760650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.760676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.760841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.760871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.761055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.761081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.761244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.761270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.761426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.761452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.761612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.761639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.761798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.761824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.761996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.762023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 
00:33:47.182 [2024-07-13 15:45:17.762193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.762218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.762393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.762418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.762603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.762628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.762754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.762779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.762938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.762965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.763134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.763159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.763294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.763319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.763474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.763499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.763657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.763682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.763847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.763878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 
00:33:47.182 [2024-07-13 15:45:17.764066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.764091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.764221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.764246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.764373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.764400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.764533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.764558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.764720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.764745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.764875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.764901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.765066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.765091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.765252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.765277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.765408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.765433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.765596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.765622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 
00:33:47.182 [2024-07-13 15:45:17.765776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.765801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.765940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.765965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.766102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.766128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.766293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.766318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.766453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.766479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.766614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.766639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.766799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.766825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.766987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.767014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.767174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.767200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 00:33:47.182 [2024-07-13 15:45:17.767324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.182 [2024-07-13 15:45:17.767349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.182 qpair failed and we were unable to recover it. 
00:33:47.188 [2024-07-13 15:45:17.807367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.807392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.807578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.807607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.807753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.807782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.807974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.808000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.808133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.808158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.808349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.808374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.808614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.808640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.808781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.808806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.808991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.809017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.809150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.809176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 
00:33:47.188 [2024-07-13 15:45:17.809339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.809364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.809524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.809550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.809730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.809760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.809918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.809947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.810126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.810154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.810311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.810336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.810497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.810522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.810718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.810743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.810913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.810940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.811181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.811205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 
00:33:47.188 [2024-07-13 15:45:17.811388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.811415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.811616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.811642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.811844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.811879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.812092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.812118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.812320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.812345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.812507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.812547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.812718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.812746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.812929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.812960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.813121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.813148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.813358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.813386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 
00:33:47.188 [2024-07-13 15:45:17.813537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.813565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.813729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.188 [2024-07-13 15:45:17.813757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.188 qpair failed and we were unable to recover it. 00:33:47.188 [2024-07-13 15:45:17.813947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.813973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.814139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.814165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.814299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.814325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.814460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.814485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.814663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.814692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.814872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.814901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.815079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.815107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.815251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.815277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 
00:33:47.189 [2024-07-13 15:45:17.815458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.815486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.815655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.815683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.815888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.815917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.816091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.816117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.816248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.816288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.816462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.816489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.816698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.816726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.816883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.816909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.817050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.817075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.817245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.817270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 
00:33:47.189 [2024-07-13 15:45:17.817431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.817457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.817610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.817636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.817772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.817815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.818024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.818053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.818236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.818264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.818444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.818468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.818600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.818626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.818810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.818835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.819036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.819065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.819244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.819269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 
00:33:47.189 [2024-07-13 15:45:17.819411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.819436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.819601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.819643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.819818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.819846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.820033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.820058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.820241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.820271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.820447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.820475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.820687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.820715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.820928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.820958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.821100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.821125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.821317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.821345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 
00:33:47.189 [2024-07-13 15:45:17.821522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.821549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.821738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.821763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.189 qpair failed and we were unable to recover it. 00:33:47.189 [2024-07-13 15:45:17.821964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.189 [2024-07-13 15:45:17.821993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.822176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.822201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.822359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.822384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.822538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.822563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.822743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.822773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.822983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.823011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.823223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.823252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.823410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.823436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 
00:33:47.190 [2024-07-13 15:45:17.823591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.823631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.823841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.823873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.824043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.824071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.824250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.824275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.824435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.824462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.824639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.824667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.824845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.824875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.825007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.825033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.825205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.825231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.825443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.825471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 
00:33:47.190 [2024-07-13 15:45:17.825672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.825700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.825889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.825915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.826096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.826124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.826301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.826329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.826504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.826533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.826680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.826705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.826907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.826935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.827102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.827130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.827346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.827371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.827532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.827557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 
00:33:47.190 [2024-07-13 15:45:17.827763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.827790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.827975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.828002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.828182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.828211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.828390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.828416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.828552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.828578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.828741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.828766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.828946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.828975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.829149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.829180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.829391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.829419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.829591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.829618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 
00:33:47.190 [2024-07-13 15:45:17.829796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.829824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.830038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.830063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.190 [2024-07-13 15:45:17.830271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.190 [2024-07-13 15:45:17.830296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.190 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.830497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.830525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.830698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.830726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.830933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.830959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.831095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.831120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.831281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.831307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.831477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.831505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.831672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.831700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 
00:33:47.191 [2024-07-13 15:45:17.831871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.831915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.832123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.832166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.832316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.832345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.832526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.832552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.832708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.832734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.832910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.832936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.833099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.833124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.833311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.833336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.833497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.833522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.833709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.833734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 
00:33:47.191 [2024-07-13 15:45:17.833894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.833920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.834084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.834109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.834274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.834299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.834453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.834478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.834647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.834672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.834803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.834829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.834975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.835001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.835160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.835186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.835373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.835398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.835583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.835608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 
00:33:47.191 [2024-07-13 15:45:17.835738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.835764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.835898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.835932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.836130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.836155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.836318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.836343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.836491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.836517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.836705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.836730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.836869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.836896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.837028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.837057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.837190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.837216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.837347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.837373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 
00:33:47.191 [2024-07-13 15:45:17.837536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.837561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.837731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.837756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.837911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.191 [2024-07-13 15:45:17.837937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.191 qpair failed and we were unable to recover it. 00:33:47.191 [2024-07-13 15:45:17.838077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.838102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.838287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.838312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.838444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.838469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.838656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.838681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.838841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.838870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.839031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.839056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.839212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.839237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 
00:33:47.192 [2024-07-13 15:45:17.839432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.839457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.839627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.839652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.839815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.839840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.840000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.840025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.840189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.840214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.840351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.840376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.840537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.840562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.840775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.840802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.841009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.841035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.841159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.841184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 
00:33:47.192 [2024-07-13 15:45:17.841336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.841361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.841548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.841573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.841708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.841733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.841876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.841903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.842075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.842100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.842290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.842315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.842487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.842512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.842673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.842698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.842894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.842919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.843076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.843101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 
00:33:47.192 [2024-07-13 15:45:17.843285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.843310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.843471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.843497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.843660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.843686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.843838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.843863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.844028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.844054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.844202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.844228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.844411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.844436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.844597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.192 [2024-07-13 15:45:17.844626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.192 qpair failed and we were unable to recover it. 00:33:47.192 [2024-07-13 15:45:17.844762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.844787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.844946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.844972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 
00:33:47.193 [2024-07-13 15:45:17.845112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.845139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.845328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.845353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.845491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.845516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.845674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.845699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.845882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.845924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.846084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.846110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.846296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.846321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.846516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.846541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.846699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.846725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.846923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.846949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 
00:33:47.193 [2024-07-13 15:45:17.847127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.847151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.847310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.847335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.847494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.847520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.847656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.847681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.847830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.847859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.848070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.848096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.848280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.848305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.848463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.848488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.848673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.848698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.848888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.848914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 
00:33:47.193 [2024-07-13 15:45:17.849073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.849099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.849260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.849284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.849445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.849470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.849642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.849667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.849833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.849858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.850051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.850077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.850232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.850258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.850420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.850445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.850606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.850632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.850813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.850843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 
00:33:47.193 [2024-07-13 15:45:17.851043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.851069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.851230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.851256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.851426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.851451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.851654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.851682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.851881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.851910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.852108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.852133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.852297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.852322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.852455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.852486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.193 [2024-07-13 15:45:17.852645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.193 [2024-07-13 15:45:17.852671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.193 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.852828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.852854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 
00:33:47.194 [2024-07-13 15:45:17.853056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.853081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.853211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.853236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.853396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.853422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.853555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.853581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.853743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.853768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.853933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.853960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.854126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.854151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.854315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.854340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.854502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.854528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.854665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.854692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 
00:33:47.194 [2024-07-13 15:45:17.854850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.854881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.855050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.855075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.855261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.855286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.855452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.855477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.855634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.855659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.855814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.855840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.855976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.856003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.856168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.856193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.856319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.856345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.856506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.856530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 
00:33:47.194 [2024-07-13 15:45:17.856692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.856717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.856843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.856873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.857014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.857039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.857174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.857199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.857359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.857385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.857571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.857596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.857801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.857829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.858016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.858044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.858228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.858254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.858416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.858442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 
00:33:47.194 [2024-07-13 15:45:17.858621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.858650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.858818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.858846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.859074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.859102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.859280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.859305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.859512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.859540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.859689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.859717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.859919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.859947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.860135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.860164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.860330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.194 [2024-07-13 15:45:17.860356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.194 qpair failed and we were unable to recover it. 00:33:47.194 [2024-07-13 15:45:17.860541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.860566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 
00:33:47.195 [2024-07-13 15:45:17.860767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.860795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.861002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.861028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.861173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.861202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.861382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.861410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.861567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.861592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.861774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.861799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.861978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.862006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.862182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.862210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.862410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.862438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.862617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.862643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 
00:33:47.195 [2024-07-13 15:45:17.862827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.862856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.863084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.863113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.863326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.863354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.863509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.863535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.863747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.863775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.863921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.863950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.864131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.864157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.864316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.864342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.864525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.864550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.864734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.864762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 
00:33:47.195 [2024-07-13 15:45:17.864943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.864968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.865129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.865154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.865294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.865322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.865530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.865558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.865764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.865792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.865946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.865971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.866100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.866141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.866344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.866372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.866508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.866535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 00:33:47.195 [2024-07-13 15:45:17.866740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.195 [2024-07-13 15:45:17.866765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.195 qpair failed and we were unable to recover it. 
00:33:47.195 [2024-07-13 15:45:17.866949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.195 [2024-07-13 15:45:17.866977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:47.195 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 15:45:17.866949 through 15:45:17.910126 ...]
00:33:47.482 [2024-07-13 15:45:17.910099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.482 [2024-07-13 15:45:17.910126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:47.482 qpair failed and we were unable to recover it.
00:33:47.482 [2024-07-13 15:45:17.910301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.910329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.910486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.910513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.910672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.910697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.910897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.910926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.911103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.911131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.911324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.911349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.911504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.911532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.911719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.911744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.911885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.911930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.912136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.912161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 
00:33:47.482 [2024-07-13 15:45:17.912348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.912377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.912585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.912613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.912796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.912821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.912952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.912979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.913107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.913133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.913280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.913320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.913498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.913526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.913726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.913754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.913906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.913932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.914090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.914115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 
00:33:47.482 [2024-07-13 15:45:17.914314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.914343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.914496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.914522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.914741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.914769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.914978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.915004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.915181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.915215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.915393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.915418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.915590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.915619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.915792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.915820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.915992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.916017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.916172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.916197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 
00:33:47.482 [2024-07-13 15:45:17.916356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.916398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.916572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.916602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.916754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.916782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.916961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.916987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.917153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.917178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.482 qpair failed and we were unable to recover it. 00:33:47.482 [2024-07-13 15:45:17.917393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.482 [2024-07-13 15:45:17.917422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.917595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.917623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.917828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.917853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.918013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.918041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.918218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.918246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 
00:33:47.483 [2024-07-13 15:45:17.918449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.918478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.918633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.918658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.918876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.918905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.919118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.919143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.919356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.919383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.919568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.919593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.919809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.919837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.920028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.920053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.920212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.920238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.920426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.920451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 
00:33:47.483 [2024-07-13 15:45:17.920588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.920614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.920781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.920806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.921002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.921027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.921162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.921187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.921345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.921370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.921550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.921578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.921751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.921780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.921983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.922009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.922187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.922215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.922357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.922385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 
00:33:47.483 [2024-07-13 15:45:17.922558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.922587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.922748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.922773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.922894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.922920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.923108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.923136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.923277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.923310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.923485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.923510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.923659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.923688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.923859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.923893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.924071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.924099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.924281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.924307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 
00:33:47.483 [2024-07-13 15:45:17.924517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.924546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.924715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.924743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.924960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.924986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.925128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.925154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.925361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.925388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.483 [2024-07-13 15:45:17.925571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.483 [2024-07-13 15:45:17.925599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.483 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.925802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.925829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.926044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.926069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.926266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.926294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.926503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.926528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 
00:33:47.484 [2024-07-13 15:45:17.926691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.926716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.926877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.926903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.927084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.927111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.927274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.927300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.927454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.927496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.927705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.927729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.927901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.927930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.928085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.928111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.928318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.928346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.928556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.928581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 
00:33:47.484 [2024-07-13 15:45:17.928760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.928788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.928958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.928987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.929166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.929195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.929403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.929428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.929576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.929604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.929777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.929807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.930025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.930055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.930266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.930291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.930478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.930506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.930716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.930744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 
00:33:47.484 [2024-07-13 15:45:17.930956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.930982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.931167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.931192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.931373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.931401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.931570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.931598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.931764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.931795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.931994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.932019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.932170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.932198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.932396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.932424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.932590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.932618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.932825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.932850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 
00:33:47.484 [2024-07-13 15:45:17.933039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.933067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.933224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.933252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.933427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.933455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.933629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.933654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.933864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.933897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.934077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.484 [2024-07-13 15:45:17.934105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.484 qpair failed and we were unable to recover it. 00:33:47.484 [2024-07-13 15:45:17.934255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.934282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.934465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.934490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.934697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.934725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.934882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.934912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 
00:33:47.485 [2024-07-13 15:45:17.935117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.935145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.935352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.935376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.935516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.935541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.935694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.935719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.935842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.935875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.936006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.936031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.936245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.936273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.936481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.936509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.936673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.936701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.936882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.936907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 
00:33:47.485 [2024-07-13 15:45:17.937061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.937089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.937299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.937327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.937529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.937556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.937733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.937759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.937922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.937947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.938075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.938100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.938313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.938341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.938545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.938570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.938722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.938750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.938898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.938927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 
00:33:47.485 [2024-07-13 15:45:17.939091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.939119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.939261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.939287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.939473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.939501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.939677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.939704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.939914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.939952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.940116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.940141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.940275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.940302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.940506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.940534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.485 qpair failed and we were unable to recover it. 00:33:47.485 [2024-07-13 15:45:17.940732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.485 [2024-07-13 15:45:17.940760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.940947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.940973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 
00:33:47.486 [2024-07-13 15:45:17.941176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.941204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.941389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.941415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.941614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.941643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.941806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.941833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.942048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.942076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.942252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.942280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.942427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.942454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.942642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.942667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.942827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.942852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.943043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.943071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 
00:33:47.486 [2024-07-13 15:45:17.943285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.943310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.943461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.943487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.943664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.943693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.943899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.943929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.944100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.944128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.944334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.944359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.944540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.944568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.944740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.944768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.944949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.944977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.945160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.945185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 
00:33:47.486 [2024-07-13 15:45:17.945320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.945346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.945516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.945541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.945696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.945724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.945935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.945960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.946123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.946151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.946290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.946318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.946490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.946518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.946722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.946747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.946953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.946982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.947189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.947214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 
00:33:47.486 [2024-07-13 15:45:17.947340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.947365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.947503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.947532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.947703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.947731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.947877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.947906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.948080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.948113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.948257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.948282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.948440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.948488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.948692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.948720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.486 [2024-07-13 15:45:17.948894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.486 [2024-07-13 15:45:17.948923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.486 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.949104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.949129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 
00:33:47.487 [2024-07-13 15:45:17.949336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.949364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.949536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.949564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.949773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.949799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.949960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.949986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.950167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.950196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.950409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.950437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.950574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.950603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.950784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.950810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.950999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.951027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.951206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.951234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 
00:33:47.487 [2024-07-13 15:45:17.951436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.951464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.951617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.951642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.951814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.951841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.952004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.952029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.952154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.952195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.952403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.952428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.952611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.952638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.952809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.952836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.953000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.953025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.953189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.953214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 
00:33:47.487 [2024-07-13 15:45:17.953419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.953446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.953660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.953688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.953883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.953929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.954055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.954080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.954220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.954245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.954397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.954438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.954596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.954623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.954809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.954834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.955022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.955051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.955240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.955267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 
00:33:47.487 [2024-07-13 15:45:17.955448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.955475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.955685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.955710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.955893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.955922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.956097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.956127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.956328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.956361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.956573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.956598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.956783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.956811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.957003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.957029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.957192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.487 [2024-07-13 15:45:17.957217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.487 qpair failed and we were unable to recover it. 00:33:47.487 [2024-07-13 15:45:17.957404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.957429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 
00:33:47.488 [2024-07-13 15:45:17.957608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.957636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.957785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.957814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.957990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.958017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.958181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.958206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.958355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.958383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.958561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.958589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.958792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.958820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.958978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.959004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.959173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.959198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.959381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.959408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 
00:33:47.488 [2024-07-13 15:45:17.959579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.959607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.959783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.959808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.960018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.960046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.960221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.960249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.960448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.960476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.960647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.960672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.960805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.960830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.961020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.961045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.961235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.961263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.961410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.961435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 
00:33:47.488 [2024-07-13 15:45:17.961615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.961642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.961781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.961809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.961986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.962014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.962192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.962216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.962396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.962424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.962599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.962627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.962770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.962798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.962979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.963004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.963138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.963163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.963325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.963365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 
00:33:47.488 [2024-07-13 15:45:17.963509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.963536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.963692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.963716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.963878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.963921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.964102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.964129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.964329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.964361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.964543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.964568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.964771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.964799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.964978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.965006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.965208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.965236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 00:33:47.488 [2024-07-13 15:45:17.965409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.488 [2024-07-13 15:45:17.965434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.488 qpair failed and we were unable to recover it. 
00:33:47.489 [2024-07-13 15:45:17.965617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.965645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.965848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.965882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.966027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.966054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.966264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.966289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.966471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.966498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.966700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.966727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.966904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.966933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.967120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.967145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.967286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.967311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.967501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.967529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 
00:33:47.489 [2024-07-13 15:45:17.967693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.967721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.967916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.967942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.968069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.968111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.968291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.968319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.968499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.968527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.968744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.968769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.968911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.968936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.969127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.969152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.969309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.969334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.969521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.969548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 
00:33:47.489 [2024-07-13 15:45:17.969696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.969726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.969909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.969935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.970130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.970155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.970287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.970313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.970440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.970464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.970665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.970693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.970875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.970904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.971082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.971107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.971244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.971269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.971477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.971505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 
00:33:47.489 [2024-07-13 15:45:17.971689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.971714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.971886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.971912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.972101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.972126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.972297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.972322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.972481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.972510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.972678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.972705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.972871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.972900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.973079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.973107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.973295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.973319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 00:33:47.489 [2024-07-13 15:45:17.973475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.489 [2024-07-13 15:45:17.973500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.489 qpair failed and we were unable to recover it. 
00:33:47.489 [2024-07-13 15:45:17.973634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.490 [2024-07-13 15:45:17.973660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.490 qpair failed and we were unable to recover it. 00:33:47.490 [2024-07-13 15:45:17.973790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.490 [2024-07-13 15:45:17.973815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.490 qpair failed and we were unable to recover it. 00:33:47.490 [2024-07-13 15:45:17.974039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.490 [2024-07-13 15:45:17.974068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.490 qpair failed and we were unable to recover it. 00:33:47.490 [2024-07-13 15:45:17.974249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.490 [2024-07-13 15:45:17.974274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.490 qpair failed and we were unable to recover it. 00:33:47.490 [2024-07-13 15:45:17.974416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.490 [2024-07-13 15:45:17.974441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.974597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.974622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.974812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.974839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.975034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.975060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.975230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.975256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.975419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.975444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 
00:33:47.491 [2024-07-13 15:45:17.975620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.975648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.975799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.975824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.976004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.976030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.976190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.976215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.976376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.976401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.976535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.976561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.976765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.976794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.976949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.976976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.977140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.977165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.977350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.977375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 
00:33:47.491 [2024-07-13 15:45:17.977525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.977552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.977709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.977737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.977932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.977958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.978090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.978115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.978249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.978291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.978474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.978501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.978673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.978701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.978874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.978899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.979060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.979085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.979213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.979238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 
00:33:47.491 [2024-07-13 15:45:17.979426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.979454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.979662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.979687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.979821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.979845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.980016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.980042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.980206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.980236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.980425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.980450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.980631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.980659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.980814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.980840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.491 qpair failed and we were unable to recover it. 00:33:47.491 [2024-07-13 15:45:17.981010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.491 [2024-07-13 15:45:17.981035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.981219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.981244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 
00:33:47.492 [2024-07-13 15:45:17.981426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.981454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.981613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.981640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.981820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.981847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.982040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.982066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.982225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.982250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.982395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.982421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.982584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.982609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.982803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.982828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.982999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.983025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.983179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.983205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 
00:33:47.492 [2024-07-13 15:45:17.983378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.983407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.983610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.983636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.983840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.983880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.984036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.984063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.984243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.984273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.984482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.984508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.984669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.984694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.984871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.984896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.985080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.985105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.985270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.985297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 
00:33:47.492 [2024-07-13 15:45:17.985461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.985486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.985621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x600480 is same with the state(5) to be set 00:33:47.492 [2024-07-13 15:45:17.985831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.985871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.986057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.986087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.986263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.986296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.986467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.986500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.986703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.986732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.986964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.987003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.987204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.987230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.987413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.987439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 
00:33:47.492 [2024-07-13 15:45:17.987616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.987643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.987791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.987820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.987982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.988008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.988139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.988164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.988381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.988432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.988645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.988670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.988843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.988874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.989064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.989090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.989233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.989260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.492 qpair failed and we were unable to recover it. 00:33:47.492 [2024-07-13 15:45:17.989424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.492 [2024-07-13 15:45:17.989452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 
00:33:47.493 [2024-07-13 15:45:17.989618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.989643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.989827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.989852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.990049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.990074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.990263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.990289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.990457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.990482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.990684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.990712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.990889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.990930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.991070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.991096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.991257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.991289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.991477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.991502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 
00:33:47.493 [2024-07-13 15:45:17.991664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.991689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.991874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.991917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.992081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.992106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.992246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.992270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.992410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.992434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.992618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.992646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.992826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.992851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.993043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.993068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.993245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.993273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.993425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.993452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 
00:33:47.493 [2024-07-13 15:45:17.993605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.993630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.993777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.993806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.994011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.994036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.994203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.994228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.994356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.994381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.994545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.994570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.994703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.994728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.994884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.994910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.995072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.995097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.995234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.995258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 
00:33:47.493 [2024-07-13 15:45:17.995418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.995443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.995567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.995592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.995753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.995778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.995915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.995941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.996075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.996100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.996294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.996323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.996477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.996505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.996696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.996722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.996915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.996941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 00:33:47.493 [2024-07-13 15:45:17.997082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.493 [2024-07-13 15:45:17.997108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.493 qpair failed and we were unable to recover it. 
00:33:47.494 [2024-07-13 15:45:17.997264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:17.997289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:17.997455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:17.997480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:17.997638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:17.997663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:17.997805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:17.997831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:17.997973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:17.997999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:17.998179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:17.998207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:17.998415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:17.998440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:17.998607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:17.998632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:17.998790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:17.998819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:17.998962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:17.998997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 
00:33:47.494 [2024-07-13 15:45:17.999160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:17.999187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:17.999322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:17.999347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:17.999508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:17.999534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:17.999726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:17.999751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:17.999938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:17.999964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:18.000150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.000176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:18.000341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.000366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:18.000528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.000554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:18.000714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.000739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:18.000898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.000924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 
00:33:47.494 [2024-07-13 15:45:18.001083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.001108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:18.001238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.001263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:18.001429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.001473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:18.001662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.001690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:18.001927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.001952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:18.002111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.002137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:18.002338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.002366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:18.002571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.002596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:18.002782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.002809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:18.002955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.002981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 
00:33:47.494 [2024-07-13 15:45:18.003121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.003148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:18.003314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.003339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:18.003498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.003524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:18.003679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.003704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:18.003876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.003902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:18.004045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.004072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:18.004259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.004285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:18.004449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.004475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:18.004661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.494 [2024-07-13 15:45:18.004691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.494 qpair failed and we were unable to recover it. 00:33:47.494 [2024-07-13 15:45:18.004877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.004904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 
00:33:47.495 [2024-07-13 15:45:18.005048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.005073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.005293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.005318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.005520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.005545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.005672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.005699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.005909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.005937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.006126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.006151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.006316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.006342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.006531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.006559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.006729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.006759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.006921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.006963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 
00:33:47.495 [2024-07-13 15:45:18.007142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.007170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.007369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.007395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.007575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.007603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.007783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.007811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.007996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.008023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.008157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.008182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.008316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.008341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.008502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.008528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.008682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.008707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.008863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.008913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 
00:33:47.495 [2024-07-13 15:45:18.009072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.009099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.009223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.009249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.009421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.009446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.009583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.009611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.009743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.009768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.009930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.009957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.010136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.010161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.010316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.010342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.010497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.010523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.010650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.010676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 
00:33:47.495 [2024-07-13 15:45:18.010863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.010894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.011024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.011049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.011218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.011243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.495 [2024-07-13 15:45:18.011416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.495 [2024-07-13 15:45:18.011444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.495 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.011609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.011635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.011849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.011895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.012095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.012122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.012288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.012315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.012502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.012546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.012735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.012780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 
00:33:47.496 [2024-07-13 15:45:18.012947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.012974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.013134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.013160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.013377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.013423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.013625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.013669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.013834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.013860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.014033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.014077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.014298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.014341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.014523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.014554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.014707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.014740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.014888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.014915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 
00:33:47.496 [2024-07-13 15:45:18.015129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.015157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.015368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.015398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.015601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.015646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.015822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.015847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.016073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.016117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.016324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.016353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.016560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.016606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.016797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.016823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.016996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.017041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.017254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.017296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 
00:33:47.496 [2024-07-13 15:45:18.017512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.017556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.017726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.017751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.017957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.018003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.018181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.018227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.018385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.018428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.018639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.018682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.018847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.018878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.019086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.019115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.019321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.019364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.019561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.019607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 
00:33:47.496 [2024-07-13 15:45:18.019797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.019823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.020013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.020058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.020248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.020294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.020511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.496 [2024-07-13 15:45:18.020554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.496 qpair failed and we were unable to recover it. 00:33:47.496 [2024-07-13 15:45:18.020743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.020769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.020969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.021013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.021229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.021274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.021495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.021537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.021677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.021704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.021896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.021923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 
00:33:47.497 [2024-07-13 15:45:18.022120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.022166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.022349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.022393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.022553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.022603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.022794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.022820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.022982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.023025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.023197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.023240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.023462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.023504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.023639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.023667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.023806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.023836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.024019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.024062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 
00:33:47.497 [2024-07-13 15:45:18.024250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.024295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.024522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.024566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.024734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.024760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.024974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.025003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.025231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.025272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.025493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.025536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.025668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.025694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.025892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.025918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.026109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.026154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.026349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.026378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 
00:33:47.497 [2024-07-13 15:45:18.026578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.026625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.026761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.026789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.026997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.027041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.027224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.027266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.027488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.027532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.027697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.027725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.027940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.027983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.028174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.028218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.028415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.028444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.028647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.028674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 
00:33:47.497 [2024-07-13 15:45:18.028836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.028862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.029057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.029100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.029266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.029330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.497 [2024-07-13 15:45:18.029502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.497 [2024-07-13 15:45:18.029546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.497 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.029736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.029761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.029946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.029995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.030184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.030227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.030389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.030432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.030567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.030593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.030761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.030787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 
00:33:47.498 [2024-07-13 15:45:18.030993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.031037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.031225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.031270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.031482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.031525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.031718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.031744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.031955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.031999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.032210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.032254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.032475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.032517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.032689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.032715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.032931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.032974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.033194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.033238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 
00:33:47.498 [2024-07-13 15:45:18.033432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.033477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.033650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.033675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.033838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.033869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.034042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.034087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.034294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.034337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.034523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.034553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.034742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.034770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.034967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.035011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.035196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.035239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.035430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.035473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 
00:33:47.498 [2024-07-13 15:45:18.035668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.035694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.035858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.035905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.036128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.036172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.036329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.036372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.036559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.036602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.036792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.036821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.037042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.037087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.037285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.037316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.037543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.037585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.037751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.037777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 
00:33:47.498 [2024-07-13 15:45:18.037932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.037958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.038154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.038202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.038362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.038406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.038618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.038661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.498 qpair failed and we were unable to recover it. 00:33:47.498 [2024-07-13 15:45:18.038837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.498 [2024-07-13 15:45:18.038863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 00:33:47.499 [2024-07-13 15:45:18.039033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.039081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 00:33:47.499 [2024-07-13 15:45:18.039272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.039322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 00:33:47.499 [2024-07-13 15:45:18.039534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.039577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 00:33:47.499 [2024-07-13 15:45:18.039759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.039784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 00:33:47.499 [2024-07-13 15:45:18.039965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.040009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 
00:33:47.499 [2024-07-13 15:45:18.040222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.040266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 00:33:47.499 [2024-07-13 15:45:18.040451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.040495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 00:33:47.499 [2024-07-13 15:45:18.040676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.040703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 00:33:47.499 [2024-07-13 15:45:18.040839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.040870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 00:33:47.499 [2024-07-13 15:45:18.041064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.041107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 00:33:47.499 [2024-07-13 15:45:18.041331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.041373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 00:33:47.499 [2024-07-13 15:45:18.041563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.041605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 00:33:47.499 [2024-07-13 15:45:18.041796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.041821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 00:33:47.499 [2024-07-13 15:45:18.042011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.042056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 00:33:47.499 [2024-07-13 15:45:18.042249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.042293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 
00:33:47.499 [2024-07-13 15:45:18.042465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.042508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 00:33:47.499 [2024-07-13 15:45:18.042674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.042701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 00:33:47.499 [2024-07-13 15:45:18.042840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.042880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 00:33:47.499 [2024-07-13 15:45:18.043069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.043112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 00:33:47.499 [2024-07-13 15:45:18.043297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.043342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 00:33:47.499 [2024-07-13 15:45:18.043563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.043606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 00:33:47.499 [2024-07-13 15:45:18.043750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.043778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 00:33:47.499 [2024-07-13 15:45:18.043991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.044036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 00:33:47.499 [2024-07-13 15:45:18.044264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.044307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 00:33:47.499 [2024-07-13 15:45:18.044521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.499 [2024-07-13 15:45:18.044564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.499 qpair failed and we were unable to recover it. 
00:33:47.502 [2024-07-13 15:45:18.073880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.502 [2024-07-13 15:45:18.073916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.502 qpair failed and we were unable to recover it. 00:33:47.503 [2024-07-13 15:45:18.074125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.503 [2024-07-13 15:45:18.074172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.503 qpair failed and we were unable to recover it. 00:33:47.503 [2024-07-13 15:45:18.074414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.503 [2024-07-13 15:45:18.074461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.503 qpair failed and we were unable to recover it. 00:33:47.503 [2024-07-13 15:45:18.074644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.503 [2024-07-13 15:45:18.074674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.503 qpair failed and we were unable to recover it. 00:33:47.503 [2024-07-13 15:45:18.074886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.503 [2024-07-13 15:45:18.074922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.503 qpair failed and we were unable to recover it. 00:33:47.503 [2024-07-13 15:45:18.075103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.503 [2024-07-13 15:45:18.075154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.503 qpair failed and we were unable to recover it. 00:33:47.503 [2024-07-13 15:45:18.075355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.503 [2024-07-13 15:45:18.075394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.503 qpair failed and we were unable to recover it. 00:33:47.503 [2024-07-13 15:45:18.075556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.503 [2024-07-13 15:45:18.075587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.503 qpair failed and we were unable to recover it. 00:33:47.503 [2024-07-13 15:45:18.075763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.503 [2024-07-13 15:45:18.075792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.503 qpair failed and we were unable to recover it. 00:33:47.503 [2024-07-13 15:45:18.076016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.503 [2024-07-13 15:45:18.076046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.503 qpair failed and we were unable to recover it. 
00:33:47.504 [2024-07-13 15:45:18.088703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.504 [2024-07-13 15:45:18.088730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.504 qpair failed and we were unable to recover it. 00:33:47.504 [2024-07-13 15:45:18.088922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.504 [2024-07-13 15:45:18.088949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.504 qpair failed and we were unable to recover it. 00:33:47.504 [2024-07-13 15:45:18.089137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.504 [2024-07-13 15:45:18.089162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.504 qpair failed and we were unable to recover it. 00:33:47.504 [2024-07-13 15:45:18.089297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.504 [2024-07-13 15:45:18.089326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.504 qpair failed and we were unable to recover it. 00:33:47.504 [2024-07-13 15:45:18.089514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.504 [2024-07-13 15:45:18.089539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.504 qpair failed and we were unable to recover it. 00:33:47.504 [2024-07-13 15:45:18.089700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.504 [2024-07-13 15:45:18.089728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.504 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.089945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.089971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.090105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.090131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.090339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.090393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.090592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.090620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 
00:33:47.505 [2024-07-13 15:45:18.090822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.090850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.091043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.091069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.091257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.091282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.091422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.091447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.091651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.091683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.091874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.091903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.092050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.092075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.092242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.092267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.092429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.092454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.092620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.092645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 
00:33:47.505 [2024-07-13 15:45:18.092826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.092853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.093072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.093098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.093283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.093308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.093468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.093493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.093654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.093679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.093819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.093844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.093982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.094008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.094195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.094220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.094343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.094368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.094500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.094526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 
00:33:47.505 [2024-07-13 15:45:18.094713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.094738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.094929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.094956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.095097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.095123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.095280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.095305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.095442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.095467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.095625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.095651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.095838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.095863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.096025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.096051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.096187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.096212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.096349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.096374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 
00:33:47.505 [2024-07-13 15:45:18.096499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.096524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.096681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.096706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.096875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.096901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.097088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.097113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.097281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.097306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.097467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.097493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.505 [2024-07-13 15:45:18.097625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.505 [2024-07-13 15:45:18.097650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.505 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.097837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.097862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.098040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.098066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.098224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.098249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 
00:33:47.506 [2024-07-13 15:45:18.098386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.098411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.098568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.098593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.098756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.098783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.098947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.098973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.099132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.099158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.099318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.099343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.099495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.099520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.099686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.099711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.099883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.099910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.100038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.100064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 
00:33:47.506 [2024-07-13 15:45:18.100248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.100273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.100403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.100428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.100589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.100616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.100774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.100799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.100962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.100988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.101127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.101152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.101338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.101363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.101535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.101560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.101720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.101745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.101935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.101961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 
00:33:47.506 [2024-07-13 15:45:18.102091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.102116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.102297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.102326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.102490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.102515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.102707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.102732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.102896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.102922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.103052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.103077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.103230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.103255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.103416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.103442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.103599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.103624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.103785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.103811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 
00:33:47.506 [2024-07-13 15:45:18.103943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.103969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.104125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.104150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.104333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.104358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.104543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.104568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.104695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.104720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.104912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.104938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.105066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.105091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.506 [2024-07-13 15:45:18.105252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.506 [2024-07-13 15:45:18.105278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.506 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.105432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.105457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.105624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.105649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 
00:33:47.507 [2024-07-13 15:45:18.105803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.105829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.106062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.106089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.109042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.109082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.109260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.109287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.109480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.109505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.109673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.109699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.109863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.109895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.110055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.110080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.110209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.110234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.110396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.110421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 
00:33:47.507 [2024-07-13 15:45:18.110574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.110599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.110756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.110783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.110993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.111019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.111156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.111181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.111340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.111365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.111546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.111571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.111755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.111781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.111965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.111991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.112151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.112176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.112306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.112331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 
00:33:47.507 [2024-07-13 15:45:18.112467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.112493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.112656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.112681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.112811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.112841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.113033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.113059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.113213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.113238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.113422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.113447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.113582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.113607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.113766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.113791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.113926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.113951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.114085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.114112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 
00:33:47.507 [2024-07-13 15:45:18.114275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.114301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.114436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.114461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.114601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.114626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.114766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.114791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.114952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.114979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.115166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.115191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.115385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.115411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.115597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.507 [2024-07-13 15:45:18.115622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.507 qpair failed and we were unable to recover it. 00:33:47.507 [2024-07-13 15:45:18.115782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.508 [2024-07-13 15:45:18.115808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.508 qpair failed and we were unable to recover it. 00:33:47.508 [2024-07-13 15:45:18.115980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.508 [2024-07-13 15:45:18.116006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.508 qpair failed and we were unable to recover it. 
00:33:47.508 [2024-07-13 15:45:18.116143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.508 [2024-07-13 15:45:18.116169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420
00:33:47.508 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every subsequent reconnect attempt, from 15:45:18.116296 up to the final attempt shown below ...]
00:33:47.513 [2024-07-13 15:45:18.155123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.513 [2024-07-13 15:45:18.155150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420
00:33:47.513 qpair failed and we were unable to recover it.
00:33:47.513 [2024-07-13 15:45:18.155342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.513 [2024-07-13 15:45:18.155368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.513 qpair failed and we were unable to recover it. 00:33:47.513 [2024-07-13 15:45:18.155497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.513 [2024-07-13 15:45:18.155522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.513 qpair failed and we were unable to recover it. 00:33:47.513 [2024-07-13 15:45:18.155685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.513 [2024-07-13 15:45:18.155711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.513 qpair failed and we were unable to recover it. 00:33:47.513 [2024-07-13 15:45:18.155899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.513 [2024-07-13 15:45:18.155925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.513 qpair failed and we were unable to recover it. 00:33:47.513 [2024-07-13 15:45:18.156114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.513 [2024-07-13 15:45:18.156140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.513 qpair failed and we were unable to recover it. 00:33:47.513 [2024-07-13 15:45:18.156304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.513 [2024-07-13 15:45:18.156330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.513 qpair failed and we were unable to recover it. 00:33:47.513 [2024-07-13 15:45:18.156458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.513 [2024-07-13 15:45:18.156483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.513 qpair failed and we were unable to recover it. 00:33:47.513 [2024-07-13 15:45:18.156637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.513 [2024-07-13 15:45:18.156663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.513 qpair failed and we were unable to recover it. 00:33:47.513 [2024-07-13 15:45:18.156794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.513 [2024-07-13 15:45:18.156819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.513 qpair failed and we were unable to recover it. 00:33:47.513 [2024-07-13 15:45:18.156981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.513 [2024-07-13 15:45:18.157006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.513 qpair failed and we were unable to recover it. 
00:33:47.513 [2024-07-13 15:45:18.157163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.513 [2024-07-13 15:45:18.157188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.513 qpair failed and we were unable to recover it. 00:33:47.513 [2024-07-13 15:45:18.157313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.513 [2024-07-13 15:45:18.157339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.513 qpair failed and we were unable to recover it. 00:33:47.513 [2024-07-13 15:45:18.157501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.513 [2024-07-13 15:45:18.157526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.513 qpair failed and we were unable to recover it. 00:33:47.513 [2024-07-13 15:45:18.157687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.513 [2024-07-13 15:45:18.157712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.513 qpair failed and we were unable to recover it. 00:33:47.513 [2024-07-13 15:45:18.157893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.513 [2024-07-13 15:45:18.157920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.513 qpair failed and we were unable to recover it. 00:33:47.513 [2024-07-13 15:45:18.158085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.513 [2024-07-13 15:45:18.158110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.513 qpair failed and we were unable to recover it. 00:33:47.513 [2024-07-13 15:45:18.158251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.513 [2024-07-13 15:45:18.158276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.513 qpair failed and we were unable to recover it. 00:33:47.513 [2024-07-13 15:45:18.158442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.513 [2024-07-13 15:45:18.158468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.513 qpair failed and we were unable to recover it. 00:33:47.513 [2024-07-13 15:45:18.158628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.513 [2024-07-13 15:45:18.158653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.513 qpair failed and we were unable to recover it. 00:33:47.513 [2024-07-13 15:45:18.158779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.513 [2024-07-13 15:45:18.158804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.513 qpair failed and we were unable to recover it. 
00:33:47.513 [2024-07-13 15:45:18.158965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.513 [2024-07-13 15:45:18.158991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.513 qpair failed and we were unable to recover it. 00:33:47.513 [2024-07-13 15:45:18.159132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.513 [2024-07-13 15:45:18.159159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.513 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.159349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.159375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.159535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.159561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.159714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.159740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.159923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.159949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.160111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.160136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.160301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.160326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.160478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.160503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.160639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.160664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 
00:33:47.514 [2024-07-13 15:45:18.160828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.160853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.161018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.161044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.161201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.161227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.161391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.161416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.161573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.161599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.161761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.161788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.161926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.161954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.162089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.162115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.162248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.162273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.162405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.162430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 
00:33:47.514 [2024-07-13 15:45:18.162614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.162639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.162792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.162818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.162958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.162985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.163172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.163197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.163354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.163385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.163550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.163576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.163702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.163730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.163887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.163913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.164050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.164077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.164265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.164290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 
00:33:47.514 [2024-07-13 15:45:18.164443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.164468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.164602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.164627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.164808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.164836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.165030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.165056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.165218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.165243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.165412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.165437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.165589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.165615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.165752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.165777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.165945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.165970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.166110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.166135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 
00:33:47.514 [2024-07-13 15:45:18.166288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.166313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.166476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.166501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.514 [2024-07-13 15:45:18.166658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.514 [2024-07-13 15:45:18.166683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.514 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.166836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.166861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.167028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.167055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.167195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.167222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.167388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.167414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.167544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.167569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.167754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.167779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.167938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.167964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 
00:33:47.515 [2024-07-13 15:45:18.168121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.168146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.168332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.168357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.168550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.168578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.168738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.168765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.168930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.168956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.169095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.169122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.169275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.169301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.169441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.169471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.169656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.169686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.169894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.169925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 
00:33:47.515 [2024-07-13 15:45:18.170100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.170130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.170316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.170345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.170499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.170533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.170728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.170773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.170960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.170988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.171163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.171194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.171414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.171445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.171635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.171664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.171922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.171953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.172166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.172201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 
00:33:47.515 [2024-07-13 15:45:18.172386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.172415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.172650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.172684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.172890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.172941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.173103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.173133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.173407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.173441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.173643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.173675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.173880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.173928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.174083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.174122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.174317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.174348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.174554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.174583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 
00:33:47.515 [2024-07-13 15:45:18.174762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.174789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.174976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.175003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.175166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.175209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.515 [2024-07-13 15:45:18.175361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.515 [2024-07-13 15:45:18.175393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.515 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.175604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.175648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.175802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.175828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.176006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.176049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.176203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.176247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.176465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.176509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.176662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.176688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 
00:33:47.516 [2024-07-13 15:45:18.176855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.176889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.177081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.177125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.177342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.177371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.177548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.177592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.177753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.177780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.177965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.178010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.178166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.178210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.178389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.178432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.178588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.178615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.178775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.178801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 
00:33:47.516 [2024-07-13 15:45:18.178954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.179000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.179198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.179242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.179421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.179464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.179656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.179682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.179873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.179899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.180070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.180118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.180326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.180369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.180592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.180633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.180794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.180820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.180974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.181024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 
00:33:47.516 [2024-07-13 15:45:18.181213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.181258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.181433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.181477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.181638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.181664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.181830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.181856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.182025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.182069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.182259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.182303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.182464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.182506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.182641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.182667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.182811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.182837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.183008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.183053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 
00:33:47.516 [2024-07-13 15:45:18.183209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.183253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.183414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.183459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.183642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.183668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.183800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.516 [2024-07-13 15:45:18.183826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.516 qpair failed and we were unable to recover it. 00:33:47.516 [2024-07-13 15:45:18.184014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.184059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.184223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.184252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.184481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.184524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.184682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.184708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.184871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.184898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.185070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.185114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 
00:33:47.517 [2024-07-13 15:45:18.185325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.185354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.185587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.185630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.185819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.185844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.186006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.186032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.186187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.186229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.186414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.186457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.186611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.186639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.186788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.186813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.187002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.187048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.187235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.187264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 
00:33:47.517 [2024-07-13 15:45:18.187433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.187477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.187666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.187708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.187910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.187936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.188123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.188167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.188347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.188375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.188609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.188655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.188816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.188842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.189023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.189050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.189239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.189282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.189499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.189543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 
00:33:47.517 [2024-07-13 15:45:18.189699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.189725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.189906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.189951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.190115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.190149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.190392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.190435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.190592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.190635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.190794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.190820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.191027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.191072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.191269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.191311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.517 [2024-07-13 15:45:18.191511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.517 [2024-07-13 15:45:18.191537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.517 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.191703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.191729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 
00:33:47.518 [2024-07-13 15:45:18.191943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.191989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.192156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.192202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.192362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.192404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.192588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.192631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.192796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.192821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.192984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.193029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.193245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.193288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.193514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.193557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.193719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.193745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.193936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.193965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 
00:33:47.518 [2024-07-13 15:45:18.194161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.194204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.194412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.194455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.194647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.194673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.194859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.194895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.195057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.195100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.195278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.195321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.195501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.195543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.195730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.195755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.195941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.195970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.196140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.196184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 
00:33:47.518 [2024-07-13 15:45:18.196347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.196373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.196531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.196574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.196762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.196788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.196949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.196993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.197156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.197199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.197409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.197456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.197598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.197624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.197790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.197816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.198009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.198053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.198240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.198285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 
00:33:47.518 [2024-07-13 15:45:18.198450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.198479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.198655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.198682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.198847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.198885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.199075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.199119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.199307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.199350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.199534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.199577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.199739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.199765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.199947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.199992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.518 [2024-07-13 15:45:18.200149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.518 [2024-07-13 15:45:18.200192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.518 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.200406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.200434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 
00:33:47.519 [2024-07-13 15:45:18.200568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.200596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.200767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.200793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.200956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.201000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.201190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.201216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.201428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.201456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.201626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.201652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.201786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.201811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.201970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.202017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.202207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.202249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.202459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.202502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 
00:33:47.519 [2024-07-13 15:45:18.202684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.202710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.202877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.202921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.203083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.203126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.203309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.203351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.203518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.203561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.203727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.203754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.203961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.204005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.204223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.204266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.204428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.204455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.204650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.204676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 
00:33:47.519 [2024-07-13 15:45:18.204818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.204845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.205064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.205102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.205342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.205375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.205551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.205591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.205803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.205833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.206006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.206035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.206274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.206306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.206562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.206594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.206772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.206805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.207022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.207051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 
00:33:47.519 [2024-07-13 15:45:18.207228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.207267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.207471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.207515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.207725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.207773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.207940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.207966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.208156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.208202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.208420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.208463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.208642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.208685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.208877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.208903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.209051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.519 [2024-07-13 15:45:18.209076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.519 qpair failed and we were unable to recover it. 00:33:47.519 [2024-07-13 15:45:18.209288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.209317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 
00:33:47.520 [2024-07-13 15:45:18.209548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.209591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.209756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.209781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.209950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.209977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.210202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.210245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.210457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.210500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.210641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.210668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.210831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.210857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.211054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.211098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.211281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.211324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.211476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.211520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 
00:33:47.520 [2024-07-13 15:45:18.211688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.211715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.211850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.211885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.212052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.212101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.212317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.212360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.212600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.212644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.212806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.212832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.213007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.213052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.213241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.213284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.213436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.213480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.213637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.213662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 
00:33:47.520 [2024-07-13 15:45:18.213820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.213846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.214039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.214081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.214298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.214341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.214522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.214566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.214705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.214732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.214921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.214951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.215188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.215231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.215408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.215452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.215619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.215644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.215805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.215831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 
00:33:47.520 [2024-07-13 15:45:18.216031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.216075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.216299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.216343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.216564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.216607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.216734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.216761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.216951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.216998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.217210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.217252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.217437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.217480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.217644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.217669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.217842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.520 [2024-07-13 15:45:18.217875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.520 qpair failed and we were unable to recover it. 00:33:47.520 [2024-07-13 15:45:18.218066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.218113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 
00:33:47.521 [2024-07-13 15:45:18.218304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.218350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.218602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.218656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.218858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.218902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.219081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.219111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.219274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.219303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.219514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.219562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.219753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.219783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.219977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.220029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.220213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.220257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.220419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.220465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 
00:33:47.521 [2024-07-13 15:45:18.220622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.220659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.220802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.220828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.221001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.221031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.221195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.221220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.221389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.221414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.221560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.221605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.221803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.221829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.221989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.222014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.222172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.222198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.222339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.222364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 
00:33:47.521 [2024-07-13 15:45:18.222554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.222584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.222784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.222818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.223029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.223058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.223210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.223240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.223426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.223455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.223630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.223663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.223840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.223877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.224071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.224100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.224276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.224305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.224509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.224540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 
00:33:47.521 [2024-07-13 15:45:18.224746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.224779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.224963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.224993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.521 [2024-07-13 15:45:18.225180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.521 [2024-07-13 15:45:18.225209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.521 qpair failed and we were unable to recover it. 00:33:47.812 [2024-07-13 15:45:18.225423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.812 [2024-07-13 15:45:18.225453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.812 qpair failed and we were unable to recover it. 00:33:47.812 [2024-07-13 15:45:18.225663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.812 [2024-07-13 15:45:18.225692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.812 qpair failed and we were unable to recover it. 00:33:47.812 [2024-07-13 15:45:18.225882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.812 [2024-07-13 15:45:18.225912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.812 qpair failed and we were unable to recover it. 00:33:47.812 [2024-07-13 15:45:18.226093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.812 [2024-07-13 15:45:18.226132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.812 qpair failed and we were unable to recover it. 00:33:47.812 [2024-07-13 15:45:18.226285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.812 [2024-07-13 15:45:18.226314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.812 qpair failed and we were unable to recover it. 00:33:47.812 [2024-07-13 15:45:18.226517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.812 [2024-07-13 15:45:18.226549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.812 qpair failed and we were unable to recover it. 00:33:47.812 [2024-07-13 15:45:18.226782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.812 [2024-07-13 15:45:18.226810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.812 qpair failed and we were unable to recover it. 
00:33:47.812 [2024-07-13 15:45:18.227009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.812 [2024-07-13 15:45:18.227039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.812 qpair failed and we were unable to recover it. 00:33:47.812 [2024-07-13 15:45:18.227226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.812 [2024-07-13 15:45:18.227256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.812 qpair failed and we were unable to recover it. 00:33:47.812 [2024-07-13 15:45:18.227478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.812 [2024-07-13 15:45:18.227524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.812 qpair failed and we were unable to recover it. 00:33:47.812 [2024-07-13 15:45:18.227746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.812 [2024-07-13 15:45:18.227783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.812 qpair failed and we were unable to recover it. 00:33:47.812 [2024-07-13 15:45:18.227969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.812 [2024-07-13 15:45:18.228006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.812 qpair failed and we were unable to recover it. 00:33:47.812 [2024-07-13 15:45:18.228162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.812 [2024-07-13 15:45:18.228204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.812 qpair failed and we were unable to recover it. 00:33:47.812 [2024-07-13 15:45:18.228386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.812 [2024-07-13 15:45:18.228415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.812 qpair failed and we were unable to recover it. 00:33:47.812 [2024-07-13 15:45:18.228576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.812 [2024-07-13 15:45:18.228602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.812 qpair failed and we were unable to recover it. 00:33:47.812 [2024-07-13 15:45:18.228794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.812 [2024-07-13 15:45:18.228822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.812 qpair failed and we were unable to recover it. 00:33:47.812 [2024-07-13 15:45:18.228978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.812 [2024-07-13 15:45:18.229012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.812 qpair failed and we were unable to recover it. 
00:33:47.812 [2024-07-13 15:45:18.229190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.812 [2024-07-13 15:45:18.229215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.812 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.229375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.229419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.229721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.229767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.229960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.229992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.230156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.230191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.230408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.230453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.230689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.230718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.230900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.230927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.231084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.231109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.231246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.231271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 
00:33:47.813 [2024-07-13 15:45:18.231460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.231485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.231644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.231670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.231825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.231850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.232020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.232046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.232189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.232213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.232380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.232405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.232568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.232594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.232727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.232752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.232887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.232913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.233068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.233093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 
00:33:47.813 [2024-07-13 15:45:18.233261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.233285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.233448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.233473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.233630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.233656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.233823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.233848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.233989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.234015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.234178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.234202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.234369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.234394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.234556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.234581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.234742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.234767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.234928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.234954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 
00:33:47.813 [2024-07-13 15:45:18.235113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.235142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.235277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.235302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.235486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.235511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.235675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.235700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.235864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.235899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.236086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.236112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.236275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.236301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.236467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.236492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.236648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.236674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.236837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.236862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 
00:33:47.813 [2024-07-13 15:45:18.237088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.237114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.813 qpair failed and we were unable to recover it. 00:33:47.813 [2024-07-13 15:45:18.237271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.813 [2024-07-13 15:45:18.237297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.237480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.237506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.237636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.237661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.237829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.237854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.238025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.238051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.238216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.238241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.238427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.238452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.238582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.238607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.238747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.238772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 
00:33:47.814 [2024-07-13 15:45:18.238904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.238930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.239086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.239111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.239275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.239300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.239436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.239461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.239586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.239611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.239761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.239786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.239960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.239986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.240119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.240150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.240297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.240322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.240457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.240482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 
00:33:47.814 [2024-07-13 15:45:18.240613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.240638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.240780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.240805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.240952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.240978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.241145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.241170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.241325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.241350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.241541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.241566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.241715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.241740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.241901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.241927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.242088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.242113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.242269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.242294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 
00:33:47.814 [2024-07-13 15:45:18.242456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.242481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.242667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.242697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.242887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.242913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.243078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.243103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.243238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.243262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.243426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.243451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.243626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.243658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.243824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.243849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.244007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.244032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.244162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.244187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 
00:33:47.814 [2024-07-13 15:45:18.244348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.244373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.244516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.814 [2024-07-13 15:45:18.244552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.814 qpair failed and we were unable to recover it. 00:33:47.814 [2024-07-13 15:45:18.244706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.244731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.244927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.244953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.245114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.245139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.245305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.245330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.245493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.245518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.245682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.245706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.245846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.245885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.246064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.246090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 
00:33:47.815 [2024-07-13 15:45:18.246251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.246276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.246440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.246464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.246648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.246674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.246861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.246894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.247057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.247082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.247245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.247269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.247409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.247434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.247600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.247625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.247805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.247830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.248004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.248029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 
00:33:47.815 [2024-07-13 15:45:18.248159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.248184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.248339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.248365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.248498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.248524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.248685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.248713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.248971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.248998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.249167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.249192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.249348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.249373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.249535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.249560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.249726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.249751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.249912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.249938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 
00:33:47.815 [2024-07-13 15:45:18.250103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.250128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.250311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.250336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.250480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.250506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.250666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.250691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.250857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.250888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.251028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.251052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.251240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.251266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.251431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.251456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.251617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.251642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.251808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.251834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 
00:33:47.815 [2024-07-13 15:45:18.252014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.252041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.252213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.252238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.815 [2024-07-13 15:45:18.252366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.815 [2024-07-13 15:45:18.252392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.815 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.252554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.252579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.252744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.252774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.252924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.252950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.253117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.253148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.253332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.253357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.253545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.253570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.253726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.253751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 
00:33:47.816 [2024-07-13 15:45:18.253924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.253950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.254084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.254109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.254269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.254293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.254482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.254506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.254639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.254664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.254798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.254823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.254962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.254987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.255146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.255171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.255329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.255355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.255517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.255546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 
00:33:47.816 [2024-07-13 15:45:18.255669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.255694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.255856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.255893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.256020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.256045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.256208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.256232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.256350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.256375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.256510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.256535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.256688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.256713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.256879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.256904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.257061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.257086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.257252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.257277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 
00:33:47.816 [2024-07-13 15:45:18.257451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.257475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.257612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.257637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.257819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.257844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.257988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.258013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.258182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.258209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.258346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.258381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.816 [2024-07-13 15:45:18.258520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.816 [2024-07-13 15:45:18.258547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.816 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.258716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.258741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.258901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.258927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.259059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.259084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 
00:33:47.817 [2024-07-13 15:45:18.259223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.259248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.259453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.259478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.259631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.259656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.259819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.259844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.260002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.260029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.260166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.260191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.260345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.260370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.260503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.260528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.260665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.260689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.260853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.260898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 
00:33:47.817 [2024-07-13 15:45:18.261028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.261054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.261213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.261238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.261400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.261425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.261591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.261616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.261777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.261801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.261945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.261972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.262130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.262156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.262319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.262345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.262508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.262533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.262668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.262693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 
00:33:47.817 [2024-07-13 15:45:18.262824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.262852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.263041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.263066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.263240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.263265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.263427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.263451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.263590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.263617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.263769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.263794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.263964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.263991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.264143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.264168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.264350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.264375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.264506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.264531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 
00:33:47.817 [2024-07-13 15:45:18.264686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.264711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.264848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.264885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.265079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.265104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.265248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.265273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.265435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.265460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.265623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.265648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.265819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.265845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.817 [2024-07-13 15:45:18.266014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.817 [2024-07-13 15:45:18.266056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.817 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.266258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.266290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.266507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.266555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 
00:33:47.818 [2024-07-13 15:45:18.266788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.266839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.267024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.267055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.267274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.267321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.267593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.267643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.267844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.267875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.268064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.268089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.268283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.268310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.268476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.268508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.268660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.268687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.268897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.268928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 
00:33:47.818 [2024-07-13 15:45:18.269091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.269115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.269249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.269274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.269454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.269482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.269629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.269656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.269833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.269858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.270015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.270040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.270189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.270214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.270353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.270378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.270598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.270625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.270812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.270836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 
00:33:47.818 [2024-07-13 15:45:18.271006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.271031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.271200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.271226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.271392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.271417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.271587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.271612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.271762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.271790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.271973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.271998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.272164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.272189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.272347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.272372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.272533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.272558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.272719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.272743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 
00:33:47.818 [2024-07-13 15:45:18.272956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.272983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.273134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.273160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.273325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.273350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.273585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.273631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.273787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.273815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.273960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.273986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.274129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.274154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.274312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.274337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.818 [2024-07-13 15:45:18.274534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.818 [2024-07-13 15:45:18.274559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.818 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.274759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.274787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 
00:33:47.819 [2024-07-13 15:45:18.275002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.275028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.275159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.275184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.275368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.275393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.275556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.275582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.275741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.275768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.275923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.275948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.276111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.276135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.276289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.276315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.276515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.276568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.276765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.276793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 
00:33:47.819 [2024-07-13 15:45:18.276971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.276997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.277147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.277172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.277357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.277383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.277520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.277556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.277742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.277772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.277969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.277995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.278124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.278149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.278337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.278363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.278527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.278553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.278688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.278713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 
00:33:47.819 [2024-07-13 15:45:18.278877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.278903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.279091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.279116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.279310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.279336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.279525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.279553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.279759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.279787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.279969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.279995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.280179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.280204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.280365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.280390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.280527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.280553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.280736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.280764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 
00:33:47.819 [2024-07-13 15:45:18.280968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.280995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.281149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.281175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.281372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.281398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.281565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.281589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.281746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.281771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.281961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.281991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.282144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.282169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.282336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.282362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.282522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.282550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 00:33:47.819 [2024-07-13 15:45:18.282726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.819 [2024-07-13 15:45:18.282754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.819 qpair failed and we were unable to recover it. 
00:33:47.820 [2024-07-13 15:45:18.282965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.282991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.283149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.283173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.283359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.283384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.283550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.283575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.283760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.283788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.283965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.283990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.284173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.284197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.284382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.284431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.284634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.284662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.284839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.284872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 
00:33:47.820 [2024-07-13 15:45:18.285017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.285043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.285192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.285218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.285387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.285413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.285579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.285607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.285766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.285794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.286002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.286028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.286182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.286207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.286370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.286395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.286555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.286580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.286752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.286791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 
00:33:47.820 [2024-07-13 15:45:18.286983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.287009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.287167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.287205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.287422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.287471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.287654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.287682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.287858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.287893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.288072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.288097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.288270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.288295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.288431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.288456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.288631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.288662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.288874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.288923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 
00:33:47.820 [2024-07-13 15:45:18.289098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.289123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.289291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.289323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.289532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.289562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.289739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.289767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.289928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.289953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.290116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.290142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.290302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.290342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.290533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.290567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.290805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.290855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.820 [2024-07-13 15:45:18.291025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.291053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 
00:33:47.820 [2024-07-13 15:45:18.291279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.820 [2024-07-13 15:45:18.291316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.820 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.291515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.291544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.291695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.291739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.291915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.291941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.292065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.292090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.292255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.292280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.292458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.292483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.292644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.292669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.292857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.292894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.293038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.293065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 
00:33:47.821 [2024-07-13 15:45:18.293269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.293294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.293478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.293503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.293660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.293685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.293844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.293877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.294048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.294073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.294240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.294265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.294427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.294452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.294605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.294630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.295440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.295473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.295679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.295708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 
00:33:47.821 [2024-07-13 15:45:18.295881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.295919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.296085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.296110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.296247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.296272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.296457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.296486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.296652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.296678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.296810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.296835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.297018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.297045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.297188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.297213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.297376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.297401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.297556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.297581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 
00:33:47.821 [2024-07-13 15:45:18.297741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.297767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.297898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.297934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.298121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.298146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.298282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.298307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.298491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.298516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.298645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.298671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.821 qpair failed and we were unable to recover it. 00:33:47.821 [2024-07-13 15:45:18.298857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.821 [2024-07-13 15:45:18.298891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.299066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.299091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.299225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.299250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.299408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.299433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 
00:33:47.822 [2024-07-13 15:45:18.299563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.299588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.299771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.299796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.299924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.299950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.300087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.300111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.300277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.300302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.300443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.300468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.300637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.300662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.300783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.300808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.300964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.300990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.301131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.301156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 
00:33:47.822 [2024-07-13 15:45:18.301318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.301347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.301517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.301542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.301708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.301733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.301884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.301916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.302073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.302097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.302287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.302312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.302473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.302498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.302661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.302686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.302846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.302877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.303030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.303055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 
00:33:47.822 [2024-07-13 15:45:18.303192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.303217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.303373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.303398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.303585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.303610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.303769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.303794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.303936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.303962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.304120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.304145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.304329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.304355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.304513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.304539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.304730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.304755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.304902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.304939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 
00:33:47.822 [2024-07-13 15:45:18.305095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.305120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.305281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.305306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.305471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.305495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.305683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.305708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.305833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.305858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.306043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.306067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.306214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.822 [2024-07-13 15:45:18.306240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.822 qpair failed and we were unable to recover it. 00:33:47.822 [2024-07-13 15:45:18.306368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.306393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.306553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.306577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.306736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.306760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 
00:33:47.823 [2024-07-13 15:45:18.306928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.306953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.307115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.307140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.307303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.307327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.307511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.307536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.307698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.307726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.307939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.307965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.308089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.308113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.308271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.308296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.308430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.308457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.308641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.308666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 
00:33:47.823 [2024-07-13 15:45:18.308827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.308851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.309023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.309053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.309226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.309251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.309407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.309432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.309563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.309587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.309722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.309747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.309916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.309941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.310077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.310102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.310299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.310323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.310508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.310533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 
00:33:47.823 [2024-07-13 15:45:18.310672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.310696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.310831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.310855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.311027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.311054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.311200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.311224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.311408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.311433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.311607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.311632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.311770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.311795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.311961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.311987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.312144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.312169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.312305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.312330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 
00:33:47.823 [2024-07-13 15:45:18.312516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.312540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.312672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.312697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.312844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.312879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.313040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.313065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.313224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.313249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.313435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.313460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.313595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.313620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.313785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.313809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.823 qpair failed and we were unable to recover it. 00:33:47.823 [2024-07-13 15:45:18.314017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.823 [2024-07-13 15:45:18.314047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.314207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.314232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 
00:33:47.824 [2024-07-13 15:45:18.314394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.314418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.314567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.314591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.314752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.314777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.314936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.314963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.315096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.315121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.315280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.315305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.315467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.315492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.315652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.315676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.315831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.315856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.316028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.316053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 
00:33:47.824 [2024-07-13 15:45:18.316213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.316237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.316388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.316412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.316581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.316607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.316747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.316772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.316928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.316954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.317117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.317145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.317280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.317304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.317470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.317494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.317680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.317704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.317860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.317892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 
00:33:47.824 [2024-07-13 15:45:18.318034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.318059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.318198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.318223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.318395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.318420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.318582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.318607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.318767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.318791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.318949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.318975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.319109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.319134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.319295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.319320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.319504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.319529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.319682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.319707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 
00:33:47.824 [2024-07-13 15:45:18.319875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.319902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.320031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.320055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.320202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.320226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.320385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.320410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.320567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.320592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.320763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.320787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.320955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.320982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.321116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.321141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.321280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.321304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.824 qpair failed and we were unable to recover it. 00:33:47.824 [2024-07-13 15:45:18.321462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.824 [2024-07-13 15:45:18.321491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 
00:33:47.825 [2024-07-13 15:45:18.321673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.321698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.321862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.321906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.322043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.322068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.322252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.322277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.322439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.322467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.322631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.322656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.322846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.322877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.323020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.323046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.323240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.323265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.323426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.323451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 
00:33:47.825 [2024-07-13 15:45:18.323607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.323631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.323789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.323815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.323951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.323976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.324174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.324199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.324337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.324362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.324526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.324551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.324710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.324737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.325026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.325053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.325214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.325238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.325377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.325402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 
00:33:47.825 [2024-07-13 15:45:18.325585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.325610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.325773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.325798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.325959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.325984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.326169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.326193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.326384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.326409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.326542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.326566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.326731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.326756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.326922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.326947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.327087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.327112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.327307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.327332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 
00:33:47.825 [2024-07-13 15:45:18.327486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.327510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.327681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.327706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.327837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.327862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.328039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.328063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.328223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.328247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.825 qpair failed and we were unable to recover it. 00:33:47.825 [2024-07-13 15:45:18.328433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.825 [2024-07-13 15:45:18.328458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.328616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.328641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.328795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.328820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.328997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.329023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.329213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.329239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 
00:33:47.826 [2024-07-13 15:45:18.329408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.329433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.329620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.329644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.329803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.329827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.330009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.330034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.330164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.330189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.330346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.330371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.330527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.330551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.330711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.330736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.330895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.330930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.331105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.331129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 
00:33:47.826 [2024-07-13 15:45:18.331266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.331291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.331477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.331502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.331687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.331714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.332010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.332035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.332229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.332255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.332418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.332443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.332599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.332624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.332776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.332801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.332961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.332986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.333155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.333182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 
00:33:47.826 [2024-07-13 15:45:18.333344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.333369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.333529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.333553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.333715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.333739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.333903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.333929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.334096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.334121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.334258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.334282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.334447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.334472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.334596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.334624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.334766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.334791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.334955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.334982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 
00:33:47.826 [2024-07-13 15:45:18.335170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.335194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.335378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.335403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.335564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.335589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.335754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.335779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.335934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.335960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.336104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.336128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.826 [2024-07-13 15:45:18.336288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.826 [2024-07-13 15:45:18.336312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.826 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.336501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.336526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.336691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.336716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.336852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.336897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 
00:33:47.827 [2024-07-13 15:45:18.337068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.337092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.337256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.337281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.337422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.337446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.337628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.337653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.337846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.337875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.338052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.338077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.338212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.338237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.338426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.338451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.338585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.338609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.338769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.338811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 
00:33:47.827 [2024-07-13 15:45:18.339025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.339051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.339235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.339260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.339450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.339475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.339608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.339633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.339793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.339817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.339986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.340011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.340174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.340198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.340342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.340367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.340529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.340554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.340714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.340739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 
00:33:47.827 [2024-07-13 15:45:18.340896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.340922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.341097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.341122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.341279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.341304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.341487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.341512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.341682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.341707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.341890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.341915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.342070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.342095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.342283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.342307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.342498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.342526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.342666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.342691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 
00:33:47.827 [2024-07-13 15:45:18.342844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.342877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.343057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.343082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.343240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.343265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.343432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.343459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.343622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.343646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.343830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.343855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.344000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.344024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.827 qpair failed and we were unable to recover it. 00:33:47.827 [2024-07-13 15:45:18.344150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.827 [2024-07-13 15:45:18.344175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.344312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.344337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.344524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.344548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 
00:33:47.828 [2024-07-13 15:45:18.344734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.344759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.344915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.344941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.345069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.345093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.345253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.345277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.345439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.345464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.345624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.345648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.345819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.345844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.345994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.346018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.346182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.346207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.346371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.346396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 
00:33:47.828 [2024-07-13 15:45:18.346558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.346582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.346719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.346745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.346911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.346936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.347074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.347098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.347282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.347307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.347444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.347473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.347637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.347662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.347824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.347847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.347983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.348008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.348166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.348190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 
00:33:47.828 [2024-07-13 15:45:18.348345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.348369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.348562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.348587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.348719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.348743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.348897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.348923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.349077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.349102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.349256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.349282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.349420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.349444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.349605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.349630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.349760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.349784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.349922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.349948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 
00:33:47.828 [2024-07-13 15:45:18.350112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.350137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.350277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.350301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.350470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.350494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.350634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.350658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.350819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.350843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.351030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.351056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.351193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.351217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.351404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.351427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.828 [2024-07-13 15:45:18.351567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.828 [2024-07-13 15:45:18.351591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.828 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.351730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.351754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 
00:33:47.829 [2024-07-13 15:45:18.351918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.351943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.352081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.352105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.352242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.352266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.352434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.352458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.352621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.352645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.352808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.352833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.352978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.353005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.353193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.353217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.353347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.353371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.353535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.353560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 
00:33:47.829 [2024-07-13 15:45:18.353699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.353723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.353853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.353883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.354047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.354072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.354234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.354258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.354389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.354414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.354573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.354598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.354731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.354758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.354897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.354923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.355088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.355113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.355255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.355279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 
00:33:47.829 [2024-07-13 15:45:18.355438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.355462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.355621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.355645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.355811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.355836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.355976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.356001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.356155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.356181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.356379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.356404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.356564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.356589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.356749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.356774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.356929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.356956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.357147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.357173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 
00:33:47.829 [2024-07-13 15:45:18.357314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.357338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.357506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.357531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.357694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.357719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.357888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.829 [2024-07-13 15:45:18.357914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.829 qpair failed and we were unable to recover it. 00:33:47.829 [2024-07-13 15:45:18.358100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.358126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.358265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.358290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.358474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.358500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.358690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.358714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.358879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.358904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.359063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.359088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 
00:33:47.830 [2024-07-13 15:45:18.359250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.359275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.359438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.359463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.359622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.359647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.359803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.359831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.360027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.360052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.360193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.360218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.360402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.360427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.360567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.360593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.360732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.360758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.360943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.360969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 
00:33:47.830 [2024-07-13 15:45:18.361131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.361156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.361291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.361315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.361476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.361500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.361659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.361683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.361843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.361872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.362031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.362056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.362191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.362217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.362356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.362380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.362512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.362536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.362674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.362699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 
00:33:47.830 [2024-07-13 15:45:18.362858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.362888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.363047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.363074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.363210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.363235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.363368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.363392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.363551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.363575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.363735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.363759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.363890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.363915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.364101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.364126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.364292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.364317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.364469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.364493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 
00:33:47.830 [2024-07-13 15:45:18.364657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.364681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.364859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.364897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.365063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.365088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.365221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.365246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.365402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.830 [2024-07-13 15:45:18.365427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.830 qpair failed and we were unable to recover it. 00:33:47.830 [2024-07-13 15:45:18.365563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.365588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.365772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.365797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.365956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.365981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.366143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.366168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.366385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.366410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 
00:33:47.831 [2024-07-13 15:45:18.366548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.366572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.366703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.366727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.366857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.366890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.367028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.367053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.367252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.367281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.367444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.367470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.367632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.367657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.367794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.367818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.367961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.367987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.368147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.368171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 
00:33:47.831 [2024-07-13 15:45:18.368303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.368328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.368488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.368513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.368696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.368720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.368904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.368930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.369113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.369139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.369280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.369305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.369465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.369490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.369647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.369673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.369841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.369872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.370034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.370059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 
00:33:47.831 [2024-07-13 15:45:18.370241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.370266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.370424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.370449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.370611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.370636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.370797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.370822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.371013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.371039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.371193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.371218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.371379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.371406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.371531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.371556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.371742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.371767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.371953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.371978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 
00:33:47.831 [2024-07-13 15:45:18.372118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.372143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.372280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.372310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.372466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.372491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.372648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.372672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.372833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.372859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.373009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.373035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.831 qpair failed and we were unable to recover it. 00:33:47.831 [2024-07-13 15:45:18.373221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.831 [2024-07-13 15:45:18.373247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.373409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.373434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.373597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.373622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.373802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.373827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 
00:33:47.832 [2024-07-13 15:45:18.374000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.374026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.374186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.374211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.374366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.374392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.374586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.374611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.374796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.374820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.374992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.375018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.375172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.375197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.375361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.375386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.375512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.375537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.375724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.375749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 
00:33:47.832 [2024-07-13 15:45:18.375907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.375934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.376117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.376143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.376278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.376303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.376486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.376511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.376669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.376694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.376854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.376891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.377076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.377101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.377264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.377289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.377476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.377501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.377663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.377688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 
00:33:47.832 [2024-07-13 15:45:18.377844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.377886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.378073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.378098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.378252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.378278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.378437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.378462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.378615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.378640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.378797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.378822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.379013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.379039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.379197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.379222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.379348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.379373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.379558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.379583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 
00:33:47.832 [2024-07-13 15:45:18.379764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.379789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.379970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.379995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.380158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.380186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.380352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.380377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.380537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.380562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.380723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.380747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.380904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.380929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.832 qpair failed and we were unable to recover it. 00:33:47.832 [2024-07-13 15:45:18.381089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.832 [2024-07-13 15:45:18.381115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.833 qpair failed and we were unable to recover it. 00:33:47.833 [2024-07-13 15:45:18.381250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.833 [2024-07-13 15:45:18.381274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.833 qpair failed and we were unable to recover it. 00:33:47.833 [2024-07-13 15:45:18.381458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.833 [2024-07-13 15:45:18.381482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.833 qpair failed and we were unable to recover it. 
00:33:47.833 [2024-07-13 15:45:18.381640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:33:47.833 [2024-07-13 15:45:18.381666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 
00:33:47.833 qpair failed and we were unable to recover it. 
[... the same three-message error group (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 15:45:18.381 through 15:45:18.420, stream time 00:33:47.833-00:33:47.838 ...]
00:33:47.838 [2024-07-13 15:45:18.420778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.420806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.838 qpair failed and we were unable to recover it. 00:33:47.838 [2024-07-13 15:45:18.420988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.421013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.838 qpair failed and we were unable to recover it. 00:33:47.838 [2024-07-13 15:45:18.421199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.421224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.838 qpair failed and we were unable to recover it. 00:33:47.838 [2024-07-13 15:45:18.421390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.421415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.838 qpair failed and we were unable to recover it. 00:33:47.838 [2024-07-13 15:45:18.421576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.421601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.838 qpair failed and we were unable to recover it. 00:33:47.838 [2024-07-13 15:45:18.421764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.421788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.838 qpair failed and we were unable to recover it. 00:33:47.838 [2024-07-13 15:45:18.421975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.422000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.838 qpair failed and we were unable to recover it. 00:33:47.838 [2024-07-13 15:45:18.422166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.422191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.838 qpair failed and we were unable to recover it. 00:33:47.838 [2024-07-13 15:45:18.422318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.422344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.838 qpair failed and we were unable to recover it. 00:33:47.838 [2024-07-13 15:45:18.422525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.422549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.838 qpair failed and we were unable to recover it. 
00:33:47.838 [2024-07-13 15:45:18.422691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.422716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.838 qpair failed and we were unable to recover it. 00:33:47.838 [2024-07-13 15:45:18.422875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.422901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.838 qpair failed and we were unable to recover it. 00:33:47.838 [2024-07-13 15:45:18.423089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.423114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.838 qpair failed and we were unable to recover it. 00:33:47.838 [2024-07-13 15:45:18.423303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.423328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.838 qpair failed and we were unable to recover it. 00:33:47.838 [2024-07-13 15:45:18.423472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.423498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.838 qpair failed and we were unable to recover it. 00:33:47.838 [2024-07-13 15:45:18.423680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.423705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.838 qpair failed and we were unable to recover it. 00:33:47.838 [2024-07-13 15:45:18.423873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.423902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.838 qpair failed and we were unable to recover it. 00:33:47.838 [2024-07-13 15:45:18.424067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.424091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.838 qpair failed and we were unable to recover it. 00:33:47.838 [2024-07-13 15:45:18.424254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.424279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.838 qpair failed and we were unable to recover it. 00:33:47.838 [2024-07-13 15:45:18.424473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.424498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.838 qpair failed and we were unable to recover it. 
00:33:47.838 [2024-07-13 15:45:18.424630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.424655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.838 qpair failed and we were unable to recover it. 00:33:47.838 [2024-07-13 15:45:18.424789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.424813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.838 qpair failed and we were unable to recover it. 00:33:47.838 [2024-07-13 15:45:18.424951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.424976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.838 qpair failed and we were unable to recover it. 00:33:47.838 [2024-07-13 15:45:18.425164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.838 [2024-07-13 15:45:18.425189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.425342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.425366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.425526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.425550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.425701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.425726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.425888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.425918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.426104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.426130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.426289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.426313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 
00:33:47.839 [2024-07-13 15:45:18.426471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.426496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.426659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.426685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.426844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.426874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.427063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.427088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.427248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.427274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.427459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.427484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.427671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.427696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.427856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.427895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.428040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.428066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.428227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.428253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 
00:33:47.839 [2024-07-13 15:45:18.428389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.428415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.428584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.428609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.428770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.428795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.428981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.429007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.429172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.429197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.429354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.429380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.429513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.429538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.429675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.429700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.429858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.429890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.430074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.430099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 
00:33:47.839 [2024-07-13 15:45:18.430257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.430282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.430410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.430435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.430596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.430621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.430780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.430805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.430976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.431006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.431166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.431191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.431344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.431369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.431538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.431563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.431756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.431781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.431934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.431961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 
00:33:47.839 [2024-07-13 15:45:18.432095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.432121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.432305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.432330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.432494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.432520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.432684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.432710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.839 [2024-07-13 15:45:18.432876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.839 [2024-07-13 15:45:18.432902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.839 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.433026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.433054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.433184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.433209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.433345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.433371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.433511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.433536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.433687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.433714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 
00:33:47.840 [2024-07-13 15:45:18.433907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.433933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.434064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.434089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.434250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.434275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.434431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.434456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.434638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.434663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.434824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.434848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.434983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.435010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.435199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.435224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.435377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.435402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.435560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.435587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 
00:33:47.840 [2024-07-13 15:45:18.435751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.435776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.435934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.435960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.436155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.436181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.436319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.436345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.436479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.436504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.436662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.436688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.436851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.436895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.437052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.437077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.437202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.437227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.437357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.437382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 
00:33:47.840 [2024-07-13 15:45:18.437537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.437562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.437719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.437744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.437913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.437939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.438098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.438123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.438307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.438332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.438469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.438499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.438653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.438679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.438856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.438890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.439046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.439072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.439235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.439260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 
00:33:47.840 [2024-07-13 15:45:18.439387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.439413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.439572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.439597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.439731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.439757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.439890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.439916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.440074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.440098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.840 [2024-07-13 15:45:18.440255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.840 [2024-07-13 15:45:18.440279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.840 qpair failed and we were unable to recover it. 00:33:47.841 [2024-07-13 15:45:18.440446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.440472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 00:33:47.841 [2024-07-13 15:45:18.440611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.440637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 00:33:47.841 [2024-07-13 15:45:18.440825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.440850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 00:33:47.841 [2024-07-13 15:45:18.441025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.441049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 
00:33:47.841 [2024-07-13 15:45:18.441208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.441233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 00:33:47.841 [2024-07-13 15:45:18.441393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.441418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 00:33:47.841 [2024-07-13 15:45:18.441606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.441631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 00:33:47.841 [2024-07-13 15:45:18.441816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.441841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 00:33:47.841 [2024-07-13 15:45:18.442007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.442032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 00:33:47.841 [2024-07-13 15:45:18.442218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.442243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 00:33:47.841 [2024-07-13 15:45:18.442406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.442431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 00:33:47.841 [2024-07-13 15:45:18.442567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.442591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 00:33:47.841 [2024-07-13 15:45:18.442749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.442773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 00:33:47.841 [2024-07-13 15:45:18.442936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.442962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 
00:33:47.841 [2024-07-13 15:45:18.443118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.443142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 00:33:47.841 [2024-07-13 15:45:18.443300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.443325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 00:33:47.841 [2024-07-13 15:45:18.443455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.443484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 00:33:47.841 [2024-07-13 15:45:18.443643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.443667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 00:33:47.841 [2024-07-13 15:45:18.443832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.443856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 00:33:47.841 [2024-07-13 15:45:18.444032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.444058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 00:33:47.841 [2024-07-13 15:45:18.444185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.444210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 00:33:47.841 [2024-07-13 15:45:18.444345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.444369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 00:33:47.841 [2024-07-13 15:45:18.444525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.444549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 00:33:47.841 [2024-07-13 15:45:18.444712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.841 [2024-07-13 15:45:18.444742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.841 qpair failed and we were unable to recover it. 
00:33:47.841 [2024-07-13 15:45:18.444926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.841 [2024-07-13 15:45:18.444951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420
00:33:47.841 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 -> sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats continuously from 15:45:18.444926 through 15:45:18.484153 ...]
00:33:47.846 [2024-07-13 15:45:18.484129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:47.846 [2024-07-13 15:45:18.484153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420
00:33:47.846 qpair failed and we were unable to recover it.
00:33:47.846 [2024-07-13 15:45:18.484336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.846 [2024-07-13 15:45:18.484368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.846 qpair failed and we were unable to recover it. 00:33:47.846 [2024-07-13 15:45:18.484567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.846 [2024-07-13 15:45:18.484592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.846 qpair failed and we were unable to recover it. 00:33:47.846 [2024-07-13 15:45:18.484728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.846 [2024-07-13 15:45:18.484753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.846 qpair failed and we were unable to recover it. 00:33:47.846 [2024-07-13 15:45:18.484924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.846 [2024-07-13 15:45:18.484949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.846 qpair failed and we were unable to recover it. 00:33:47.846 [2024-07-13 15:45:18.485138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.846 [2024-07-13 15:45:18.485162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.485297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.485322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.485484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.485509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.485673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.485697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.485855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.485888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.486026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.486051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 
00:33:47.847 [2024-07-13 15:45:18.486211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.486236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.486429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.486453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.486612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.486637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.486793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.486818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.486966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.486992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.487127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.487151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.487304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.487328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.487487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.487512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.487676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.487701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.487858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.487899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 
00:33:47.847 [2024-07-13 15:45:18.488069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.488094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.488253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.488278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.488462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.488487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.488642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.488666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.488831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.488856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.489043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.489068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.489229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.489254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.489438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.489466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.489604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.489628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.489786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.489812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 
00:33:47.847 [2024-07-13 15:45:18.489974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.489999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.490155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.490179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.490313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.490339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.490499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.490524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.490662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.490686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.490836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.490861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.491004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.491027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.491184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.491209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.491370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.491396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.491555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.491579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 
00:33:47.847 [2024-07-13 15:45:18.491743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.491768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.491932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.491959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.847 qpair failed and we were unable to recover it. 00:33:47.847 [2024-07-13 15:45:18.492097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.847 [2024-07-13 15:45:18.492121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.492250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.492274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.492428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.492453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.492615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.492639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.492797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.492822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.492955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.492981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.493169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.493193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.493355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.493379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 
00:33:47.848 [2024-07-13 15:45:18.493564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.493589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.493750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.493776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.493946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.493972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.494112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.494137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.494305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.494330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.494500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.494525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.494659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.494683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.494842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.494872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.495034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.495059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.495198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.495224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 
00:33:47.848 [2024-07-13 15:45:18.495364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.495389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.495529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.495555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.495727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.495751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.495888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.495915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.496053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.496078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.496248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.496273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.496458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.496482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.496642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.496667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.496789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.496816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.496975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.497001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 
00:33:47.848 [2024-07-13 15:45:18.497186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.497211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.497364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.497389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.497545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.497569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.497713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.497739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.497987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.498013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.498179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.498203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.498372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.498396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.498585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.498610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.498773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.498797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.498986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.499012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 
00:33:47.848 [2024-07-13 15:45:18.499175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.499200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.499338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.499361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.499560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.499585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.848 [2024-07-13 15:45:18.499746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.848 [2024-07-13 15:45:18.499771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.848 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.499906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.499932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.500116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.500141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.500299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.500324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.500483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.500508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.500668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.500692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.500825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.500849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 
00:33:47.849 [2024-07-13 15:45:18.501040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.501065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.501198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.501222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.501388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.501412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.501579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.501604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.501728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.501752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.501939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.501968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.502132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.502157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.502317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.502342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.502525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.502549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.502733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.502757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 
00:33:47.849 [2024-07-13 15:45:18.502942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.502967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.503125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.503150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.503312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.503336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.503464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.503488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.503676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.503700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.503834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.503858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.504033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.504058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.504193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.504217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.504347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.504372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.504553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.504594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 
00:33:47.849 [2024-07-13 15:45:18.504791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.504821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.505048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.505075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.505261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.505287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.505473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.505498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.505658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.505684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.505878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.505904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.506056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.506080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.506245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.506270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.506406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.506430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.506618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.506642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 
00:33:47.849 [2024-07-13 15:45:18.506828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.506853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.507043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.507083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.507244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.507271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.507468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.507494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.849 qpair failed and we were unable to recover it. 00:33:47.849 [2024-07-13 15:45:18.507650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.849 [2024-07-13 15:45:18.507675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.507808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.507836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.508004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.508030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.508173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.508199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.508364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.508390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.508577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.508603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 
00:33:47.850 [2024-07-13 15:45:18.508766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.508792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.508981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.509008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.509147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.509174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.509362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.509388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.509519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.509545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.509708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.509734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.509902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.509929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.510097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.510123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.510251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.510276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.510409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.510435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 
00:33:47.850 [2024-07-13 15:45:18.510574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.510600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.510730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.510757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.510922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.510950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.511110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.511135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.511268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.511295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.511457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.511483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.511638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.511664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.511796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.511822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.512035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.512076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.512243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.512274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 
00:33:47.850 [2024-07-13 15:45:18.512413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.512438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.512594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.512619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.512808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.512836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.513061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.513088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.513229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.513255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.513413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.513439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.513598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.513623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.513781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.513806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.513974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.513999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.514167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.514192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 
00:33:47.850 [2024-07-13 15:45:18.514359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.514384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.514535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.514559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.514696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.514721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.514889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.514915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.515072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.850 [2024-07-13 15:45:18.515097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.850 qpair failed and we were unable to recover it. 00:33:47.850 [2024-07-13 15:45:18.515261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.515285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.515416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.515442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.515576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.515601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.515733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.515758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.515943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.515971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 
00:33:47.851 [2024-07-13 15:45:18.516160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.516185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.516345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.516369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.516555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.516579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.516768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.516796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.516947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.516973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.517114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.517141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.517331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.517359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.517547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.517573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.517720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.517745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.517904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.517931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 
00:33:47.851 [2024-07-13 15:45:18.518094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.518119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.518285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.518310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.518505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.518530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.518685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.518709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.518876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.518901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.519061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.519086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.519221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.519245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.519402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.519427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.519607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.519632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.519790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.519816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 
00:33:47.851 [2024-07-13 15:45:18.519971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.520011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.520210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.520237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.520410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.520436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.520597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.520622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.520782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.520808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.520970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.520997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.521136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.521163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.521301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.521326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.521519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.521545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.851 [2024-07-13 15:45:18.521702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.521728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 
00:33:47.851 [2024-07-13 15:45:18.521894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.851 [2024-07-13 15:45:18.521922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.851 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.522088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.522114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.522282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.522308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.522449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.522481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.522645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.522671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.522834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.522864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.523068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.523094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.523292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.523318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.523481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.523509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.523670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.523714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 
00:33:47.852 [2024-07-13 15:45:18.523878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.523905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.524066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.524093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.524286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.524312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.524504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.524530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.524690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.524717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.524880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.524907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.525066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.525092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.525262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.525288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.525426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.525452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.525613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.525639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 
00:33:47.852 [2024-07-13 15:45:18.525828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.525853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.526017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.526043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.526176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.526201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.526344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.526370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.526511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.526538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.526702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.526730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.526873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.526900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.527069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.527096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.527257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.527283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.527476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.527503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 
00:33:47.852 [2024-07-13 15:45:18.527668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.527695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.527855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.527891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.528058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.528085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.528247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.528272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.528431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.528457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.528622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.528648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.528811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.528837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.529013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.529040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.529168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.529195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.529384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.529410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 
00:33:47.852 [2024-07-13 15:45:18.529569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.529596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.852 qpair failed and we were unable to recover it. 00:33:47.852 [2024-07-13 15:45:18.529760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.852 [2024-07-13 15:45:18.529787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.529982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.530009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.530145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.530177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.530349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.530375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.530541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.530567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.530739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.530765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.530930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.530957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.531123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.531157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.531316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.531342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 
00:33:47.853 [2024-07-13 15:45:18.531511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.531537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.531699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.531726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.531905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.531931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.532062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.532088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.532287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.532313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.532448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.532474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.532614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.532642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.532789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.532815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.532970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.532996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.533190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.533216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 
00:33:47.853 [2024-07-13 15:45:18.533406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.533432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.533619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.533645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.533806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.533832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.533987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.534014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.534180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.534206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.534365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.534391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.534552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.534578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.534745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.534774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.534971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.534997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.535158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.535184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 
00:33:47.853 [2024-07-13 15:45:18.535375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.535424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.535643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.535682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.535914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.535943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.536123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.536156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.536348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.536374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.536530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.536554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.536712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.536738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.536927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.536953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.537086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.537111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.537275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.537301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 
00:33:47.853 [2024-07-13 15:45:18.537468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.537492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.853 [2024-07-13 15:45:18.537656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.853 [2024-07-13 15:45:18.537683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.853 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.537848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.537888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.538034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.538064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.538223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.538249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.538437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.538462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.538622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.538647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.538783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.538807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.538947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.538973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.539126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.539151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 
00:33:47.854 [2024-07-13 15:45:18.539290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.539317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.539475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.539500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.539664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.539690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.539853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.539885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.540076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.540101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.540243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.540270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.540396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.540422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.540585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.540610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.540767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.540792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.540938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.540964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 
00:33:47.854 [2024-07-13 15:45:18.541091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.541117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.541272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.541297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.541459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.541484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.541641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.541666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.541826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.541853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.542002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.542027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.542188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.542215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.542406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.542431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.542564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.542591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.542791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.542831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 
00:33:47.854 [2024-07-13 15:45:18.543032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.543060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.543233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.543259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.543444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.543470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.543636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.543662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.543827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.543853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.544031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.544057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.544190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.544216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.544385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.544412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.544575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.544603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.544797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.544822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 
00:33:47.854 [2024-07-13 15:45:18.544995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.545021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.545188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.545213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.854 qpair failed and we were unable to recover it. 00:33:47.854 [2024-07-13 15:45:18.545374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.854 [2024-07-13 15:45:18.545399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.855 qpair failed and we were unable to recover it. 00:33:47.855 [2024-07-13 15:45:18.545534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.855 [2024-07-13 15:45:18.545564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.855 qpair failed and we were unable to recover it. 00:33:47.855 [2024-07-13 15:45:18.545728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.855 [2024-07-13 15:45:18.545754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.855 qpair failed and we were unable to recover it. 00:33:47.855 [2024-07-13 15:45:18.545916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.855 [2024-07-13 15:45:18.545942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.855 qpair failed and we were unable to recover it. 00:33:47.855 [2024-07-13 15:45:18.546105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.855 [2024-07-13 15:45:18.546130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:47.855 qpair failed and we were unable to recover it. 00:33:47.855 [2024-07-13 15:45:18.546270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.855 [2024-07-13 15:45:18.546296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.546484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.546511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.546672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.546697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 
00:33:48.140 [2024-07-13 15:45:18.546856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.546891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.547082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.547108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.547273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.547298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.547462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.547487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.547674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.547699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.547858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.547894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.548032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.548057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.548217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.548242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.548389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.548415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.548600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.548625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 
00:33:48.140 [2024-07-13 15:45:18.548787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.548812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.548962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.548987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.549119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.549144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.549327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.549352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.549538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.549563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.549698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.549723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.549870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.549896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.550051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.550076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.550243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.550268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.550398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.550424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 
00:33:48.140 [2024-07-13 15:45:18.550611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.550640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.550836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.550861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.551008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.551034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.551217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.551243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.551402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.551427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.551584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.551609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.140 qpair failed and we were unable to recover it. 00:33:48.140 [2024-07-13 15:45:18.551744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.140 [2024-07-13 15:45:18.551769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.551928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.551955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.552120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.552145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.552302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.552327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 
00:33:48.141 [2024-07-13 15:45:18.552486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.552513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.552671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.552696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.552889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.552924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.553061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.553086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.553235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.553260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.553415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.553440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.553605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.553630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.553813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.553838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.554006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.554032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.554168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.554193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 
00:33:48.141 [2024-07-13 15:45:18.554349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.554374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.554535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.554560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.554742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.554767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.554904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.554930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.555095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.555120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.555279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.555303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.555460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.555485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.555649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.555674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.555849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.555881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.556060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.556085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 
00:33:48.141 [2024-07-13 15:45:18.556255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.556279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.556437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.556462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.556647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.556672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.556812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.556837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.557014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.557039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.557208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.557233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.557391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.557416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.557550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.557575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.557729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.557754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.557910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.557936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 
00:33:48.141 [2024-07-13 15:45:18.558098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.558133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.558298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.558323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.558477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.558502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.558650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.558675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.558834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.558860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.559022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.559047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.141 qpair failed and we were unable to recover it. 00:33:48.141 [2024-07-13 15:45:18.559206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.141 [2024-07-13 15:45:18.559231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.559414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.559439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.559604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.559629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.559813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.559838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 
00:33:48.142 [2024-07-13 15:45:18.560005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.560031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.560192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.560216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.560401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.560426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.560611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.560636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.560805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.560830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.561031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.561057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.561216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.561240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.561380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.561405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.561571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.561596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.561785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.561810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 
00:33:48.142 [2024-07-13 15:45:18.561968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.561994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.562158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.562182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.562315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.562339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.562521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.562545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.562736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.562760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.562929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.562956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.563118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.563142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.563335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.563359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.563500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.563525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.563680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.563705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 
00:33:48.142 [2024-07-13 15:45:18.563872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.563909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.564070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.564095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.564224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.564249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.564382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.564406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.564600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.564624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.564787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.564811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.564997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.565023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.565160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.565184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.565372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.565397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.565553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.565580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 
00:33:48.142 [2024-07-13 15:45:18.565742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.565771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.565959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.565985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.566121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.566146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.566312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.566337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.566501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.566526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.566665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.566689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.566830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.142 [2024-07-13 15:45:18.566855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.142 qpair failed and we were unable to recover it. 00:33:48.142 [2024-07-13 15:45:18.567039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.567065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.567226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.567250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.567384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.567409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 
00:33:48.143 [2024-07-13 15:45:18.567570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.567595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.567750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.567775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.567933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.567958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.568096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.568121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.568288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.568314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.568454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.568478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.568663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.568689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.568855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.568890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.569037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.569061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.569193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.569219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 
00:33:48.143 [2024-07-13 15:45:18.569384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.569409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.569538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.569561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.569721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.569747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.569911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.569937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.570125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.570150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.570311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.570335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.570523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.570548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.570717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.570742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.570931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.570957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.571144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.571169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 
00:33:48.143 [2024-07-13 15:45:18.571332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.571356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.571494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.571518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.571706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.571731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.571872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.571898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.572065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.572089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.572273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.572298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.572485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.572510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.572666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.572691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.572849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.572882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.573023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.573049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 
00:33:48.143 [2024-07-13 15:45:18.573232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.573261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.573397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.573421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.573583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.573608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.573808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.573833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.573977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.574003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.574163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.574189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.143 [2024-07-13 15:45:18.574356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.143 [2024-07-13 15:45:18.574383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.143 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.574555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.574580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.574722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.574746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.574933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.574959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 
00:33:48.144 [2024-07-13 15:45:18.575093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.575118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.575291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.575316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.575477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.575501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.575637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.575661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.575862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.575896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.576055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.576080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.576263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.576287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.576450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.576476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.576637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.576662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.576820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.576844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 
00:33:48.144 [2024-07-13 15:45:18.577040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.577065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.577213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.577237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.577435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.577460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.577603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.577630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.577770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.577796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.577970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.577996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.578181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.578207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.578353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.578377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.578540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.578565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.578754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.578780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 
00:33:48.144 [2024-07-13 15:45:18.578970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.578995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.579121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.579147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.579312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.579336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.579496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.579521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.579650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.579675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.579812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.579836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.579999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.580023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.580211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.580236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.580392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.580417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 00:33:48.144 [2024-07-13 15:45:18.580577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.144 [2024-07-13 15:45:18.580601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.144 qpair failed and we were unable to recover it. 
00:33:48.144 [2024-07-13 15:45:18.580781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.580810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.581003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.581028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.581216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.581241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.581427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.581452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.581611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.581635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.581778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.581803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.581970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.581996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.582183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.582208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.582367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.582392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.582578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.582603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 
00:33:48.145 [2024-07-13 15:45:18.582764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.582789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.582985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.583011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.583169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.583194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.583353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.583377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.583533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.583558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.583721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.583746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.583937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.583962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.584107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.584131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.584267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.584294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.584460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.584484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 
00:33:48.145 [2024-07-13 15:45:18.584647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.584672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.584837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.584862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.585031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.585057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.585217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.585242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.585370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.585394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.585555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.585580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.585783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.585811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.586005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.586030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.586190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.586214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.586401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.586426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 
00:33:48.145 [2024-07-13 15:45:18.586612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.586638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.586797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.586822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.587012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.587037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.587209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.587234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.587400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.587425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.587617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.587641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.587771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.587796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.587948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.587974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.588113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.588137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.145 [2024-07-13 15:45:18.588298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.588323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 
00:33:48.145 [2024-07-13 15:45:18.588477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.145 [2024-07-13 15:45:18.588506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.145 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.588644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.588669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.588832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.588857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.589022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.589046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.589235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.589259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.589420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.589445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.589609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.589633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.589791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.589815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.589951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.589976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.590161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.590186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 
00:33:48.146 [2024-07-13 15:45:18.590350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.590376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.590545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.590570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.590753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.590778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.590944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.590970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.591136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.591161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.591348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.591373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.591535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.591559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.591757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.591782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.591945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.591971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.592127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.592152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 
00:33:48.146 [2024-07-13 15:45:18.592281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.592305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.592468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.592493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.592653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.592678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.592870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.592896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.593064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.593088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.593253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.593278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.593437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.593461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.593652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.593677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.593877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.593903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.594093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.594117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 
00:33:48.146 [2024-07-13 15:45:18.594249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.594274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.594426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.594451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.594609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.594633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.594795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.594820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.595007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.595032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.595177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.595201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.595362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.595386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.595546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.595573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.595732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.595758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.595887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.595912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 
00:33:48.146 [2024-07-13 15:45:18.596074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.146 [2024-07-13 15:45:18.596103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-13 15:45:18.596260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.596285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.596447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.596472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.596610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.596634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.596816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.596841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.596986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.597012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.597147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.597171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.597329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.597354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.597539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.597564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.597726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.597750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 
00:33:48.147 [2024-07-13 15:45:18.597906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.597931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.598086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.598111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.598274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.598299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.598483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.598507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.598673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.598699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.598902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.598928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.599095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.599120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.599275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.599300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.599464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.599488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.599647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.599672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 
00:33:48.147 [2024-07-13 15:45:18.599828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.599853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.600021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.600047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.600184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.600208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.600369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.600393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.600551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.600576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.600758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.600783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.600913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.600938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.601103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.601128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.601291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.601316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.601484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.601508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 
00:33:48.147 [2024-07-13 15:45:18.601696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.601721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.601906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.601932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.602088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.602113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.602275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.602299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.602461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.602485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.602645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.602671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.602831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.602856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.603003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.603027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.603194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.603219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.603374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.603399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 
00:33:48.147 [2024-07-13 15:45:18.603561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.147 [2024-07-13 15:45:18.603589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.147 qpair failed and we were unable to recover it. 00:33:48.147 [2024-07-13 15:45:18.603776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.148 [2024-07-13 15:45:18.603802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.148 qpair failed and we were unable to recover it. 00:33:48.148 [2024-07-13 15:45:18.603983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.148 [2024-07-13 15:45:18.604009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.148 qpair failed and we were unable to recover it. 00:33:48.148 [2024-07-13 15:45:18.604152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.148 [2024-07-13 15:45:18.604177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.148 qpair failed and we were unable to recover it. 00:33:48.148 [2024-07-13 15:45:18.604371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.148 [2024-07-13 15:45:18.604395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.148 qpair failed and we were unable to recover it. 00:33:48.148 [2024-07-13 15:45:18.604529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.148 [2024-07-13 15:45:18.604554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.148 qpair failed and we were unable to recover it. 00:33:48.148 [2024-07-13 15:45:18.604691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.148 [2024-07-13 15:45:18.604717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.148 qpair failed and we were unable to recover it. 00:33:48.148 [2024-07-13 15:45:18.604881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.148 [2024-07-13 15:45:18.604906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.148 qpair failed and we were unable to recover it. 00:33:48.148 [2024-07-13 15:45:18.605073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.148 [2024-07-13 15:45:18.605098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.148 qpair failed and we were unable to recover it. 00:33:48.148 [2024-07-13 15:45:18.605233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.148 [2024-07-13 15:45:18.605258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.148 qpair failed and we were unable to recover it. 
00:33:48.148 [2024-07-13 15:45:18.605393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.148 [2024-07-13 15:45:18.605417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.148 qpair failed and we were unable to recover it.
[This two-line failure, posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED) followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420, recurs approximately 210 times in this burst, with only the microsecond timestamp advancing from 15:45:18.605393 to 15:45:18.644810; every attempt ends with "qpair failed and we were unable to recover it."]
00:33:48.153 [2024-07-13 15:45:18.644810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.153 [2024-07-13 15:45:18.644834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.153 qpair failed and we were unable to recover it.
00:33:48.153 [2024-07-13 15:45:18.645003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.153 [2024-07-13 15:45:18.645028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.153 qpair failed and we were unable to recover it. 00:33:48.153 [2024-07-13 15:45:18.645193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.153 [2024-07-13 15:45:18.645218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.153 qpair failed and we were unable to recover it. 00:33:48.153 [2024-07-13 15:45:18.645384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.153 [2024-07-13 15:45:18.645409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.153 qpair failed and we were unable to recover it. 00:33:48.153 [2024-07-13 15:45:18.645566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.153 [2024-07-13 15:45:18.645591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.153 qpair failed and we were unable to recover it. 00:33:48.153 [2024-07-13 15:45:18.645757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.153 [2024-07-13 15:45:18.645782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.153 qpair failed and we were unable to recover it. 00:33:48.153 [2024-07-13 15:45:18.645975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.153 [2024-07-13 15:45:18.646000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.153 qpair failed and we were unable to recover it. 00:33:48.153 [2024-07-13 15:45:18.646163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.153 [2024-07-13 15:45:18.646188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.153 qpair failed and we were unable to recover it. 00:33:48.153 [2024-07-13 15:45:18.646325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.153 [2024-07-13 15:45:18.646353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.153 qpair failed and we were unable to recover it. 00:33:48.153 [2024-07-13 15:45:18.646531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.153 [2024-07-13 15:45:18.646555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.153 qpair failed and we were unable to recover it. 00:33:48.153 [2024-07-13 15:45:18.646718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.153 [2024-07-13 15:45:18.646742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.153 qpair failed and we were unable to recover it. 
00:33:48.153 [2024-07-13 15:45:18.646924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.153 [2024-07-13 15:45:18.646952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.153 qpair failed and we were unable to recover it. 00:33:48.153 [2024-07-13 15:45:18.647090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.153 [2024-07-13 15:45:18.647116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.153 qpair failed and we were unable to recover it. 00:33:48.153 [2024-07-13 15:45:18.647279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.153 [2024-07-13 15:45:18.647304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.153 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.647465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.647489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.647643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.647668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.647829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.647854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.647997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.648022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.648155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.648179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.648365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.648391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.648552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.648576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 
00:33:48.154 [2024-07-13 15:45:18.648736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.648760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.648905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.648932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.649118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.649143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.649325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.649349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.649509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.649533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.649690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.649714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.649900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.649926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.650087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.650112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.650306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.650331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.650495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.650519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 
00:33:48.154 [2024-07-13 15:45:18.650684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.650710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.650872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.650897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.651062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.651087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.651249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.651273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.651411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.651435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.651591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.651617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.651778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.651802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.651935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.651959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.652096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.652121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.652278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.652304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 
00:33:48.154 [2024-07-13 15:45:18.652466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.652492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.652655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.652680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.652844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.652877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.653036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.653061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.653215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.653239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.653390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.653414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.653536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.653561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.653750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.653788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.653922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.653948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.654131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.654156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 
00:33:48.154 [2024-07-13 15:45:18.654316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.654342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.654477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.654502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.654689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.154 [2024-07-13 15:45:18.654713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.154 qpair failed and we were unable to recover it. 00:33:48.154 [2024-07-13 15:45:18.654889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.654918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.655128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.655153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.655289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.655314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.655473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.655499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.655658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.655683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.655852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.655885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.656030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.656056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 
00:33:48.155 [2024-07-13 15:45:18.656250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.656275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.656464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.656488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.656688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.656713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.656847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.656880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.657043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.657070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.657232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.657256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.657421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.657446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.657631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.657656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.657840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.657871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.658003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.658027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 
00:33:48.155 [2024-07-13 15:45:18.658214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.658238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.658402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.658428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.658562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.658587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.658750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.658775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.658966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.658992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.659125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.659150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.659332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.659358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.659493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.659517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.659649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.659674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.659856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.659891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 
00:33:48.155 [2024-07-13 15:45:18.660065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.660089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.660249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.660275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.660399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.660423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.660550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.660573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.660730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.660755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.660923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.660949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.661104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.661129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.661262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.661292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.661450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.661475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.661639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.661664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 
00:33:48.155 [2024-07-13 15:45:18.661849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.661881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.662044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.662070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.662228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.662254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.155 [2024-07-13 15:45:18.662403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.155 [2024-07-13 15:45:18.662427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.155 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.662613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.662638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.662780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.662807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.662967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.662993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.663177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.663202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.663381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.663405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.663600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.663625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 
00:33:48.156 [2024-07-13 15:45:18.663789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.663815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.663952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.663978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.664138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.664162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.664357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.664381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.664543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.664568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.664726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.664751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.664916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.664940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.665079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.665104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.665265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.665292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.665462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.665486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 
00:33:48.156 [2024-07-13 15:45:18.665674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.665699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.665840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.665872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.666015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.666040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.666200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.666224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.666366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.666392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.666529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.666553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.666708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.666733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.666889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.666915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.667071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.667096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.667231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.667256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 
00:33:48.156 [2024-07-13 15:45:18.667414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.667439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.667598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.667624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.667789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.667814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.668011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.668037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.668191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.668216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.668360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.668385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.668545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.668571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.668731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.668762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.668919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.668945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.669102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.669127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 
00:33:48.156 [2024-07-13 15:45:18.669287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.669311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.669508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.669533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.669658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.669683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.669833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.156 [2024-07-13 15:45:18.669859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.156 qpair failed and we were unable to recover it. 00:33:48.156 [2024-07-13 15:45:18.670051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.670077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.670212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.670237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.670386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.670411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.670546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.670573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.670760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.670786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.670948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.670974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 
00:33:48.157 [2024-07-13 15:45:18.671137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.671162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.671297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.671323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.671456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.671482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.671638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.671663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.671823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.671848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.672018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.672044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.672176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.672200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.672385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.672410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.672570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.672595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.672779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.672804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 
00:33:48.157 [2024-07-13 15:45:18.672963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.672989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.673154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.673179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.673362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.673387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.673513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.673538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.673687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.673712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.673848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.673879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.674078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.674103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.674292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.674317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.674478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.674503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.674664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.674690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 
00:33:48.157 [2024-07-13 15:45:18.674825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.674849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.675021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.675047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.675206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.675231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.675411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.675436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.675599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.675624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.675764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.675790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.675955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.675980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.676118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.676147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.676315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.157 [2024-07-13 15:45:18.676340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.157 qpair failed and we were unable to recover it. 00:33:48.157 [2024-07-13 15:45:18.676493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.676518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 
00:33:48.158 [2024-07-13 15:45:18.676677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.676703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.676863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.676894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.677054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.677080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.677240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.677266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.677435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.677462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.677587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.677613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.677777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.677803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.677989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.678015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.678182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.678208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.678369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.678395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 
00:33:48.158 [2024-07-13 15:45:18.678581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.678605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.678774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.678798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.678982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.679007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.679166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.679191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.679381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.679406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.679544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.679569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.679697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.679721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.679905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.679930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.680096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.680120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.680288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.680312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 
00:33:48.158 [2024-07-13 15:45:18.680493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.680518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.680658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.680683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.680837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.680861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.681035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.681060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.681219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.681244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.681405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.681429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.681562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.681586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.681738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.681763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.681929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.681955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.682119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.682143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 
00:33:48.158 [2024-07-13 15:45:18.682295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.682321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.682459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.682485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.682671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.682696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.682884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.682909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.683098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.683123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.683263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.683290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.683476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.683501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.683690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.683718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.683900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.683926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 00:33:48.158 [2024-07-13 15:45:18.684113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.158 [2024-07-13 15:45:18.684139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.158 qpair failed and we were unable to recover it. 
00:33:48.159 [2024-07-13 15:45:18.684299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.684324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.684483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.684507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.684670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.684696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.684852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.684884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.685071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.685095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.685253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.685279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.685465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.685490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.685653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.685678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.685879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.685904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.686057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.686081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 
00:33:48.159 [2024-07-13 15:45:18.686247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.686273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.686437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.686462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.686611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.686635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.686820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.686844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.687022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.687048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.687184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.687209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.687369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.687394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.687582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.687607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.687765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.687790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.687950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.687975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 
00:33:48.159 [2024-07-13 15:45:18.688136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.688160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.688314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.688339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.688495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.688519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.688661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.688686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.688826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.688851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.688998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.689021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.689185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.689209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.689394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.689419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.689583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.689608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.689781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.689807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 
00:33:48.159 [2024-07-13 15:45:18.689944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.689969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.690157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.690182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.690361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.690387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.690546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.690572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.690785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.690813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.691023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.691048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.691183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.691207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.691364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.691394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.691559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.691584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 00:33:48.159 [2024-07-13 15:45:18.691722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.159 [2024-07-13 15:45:18.691748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.159 qpair failed and we were unable to recover it. 
00:33:48.159 [2024-07-13 15:45:18.691915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.691941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.692103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.692129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.692292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.692316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.692448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.692473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.692636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.692661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.692821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.692845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.693014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.693039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.693200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.693225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.693385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.693411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.693578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.693603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 
00:33:48.160 [2024-07-13 15:45:18.693760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.693786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.693928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.693954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.694141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.694167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.694303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.694327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.694487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.694514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.694673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.694699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.694862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.694905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.695095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.695120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.695279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.695303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.695465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.695491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 
00:33:48.160 [2024-07-13 15:45:18.695648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.695673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.695859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.695909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.696097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.696122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.696281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.696305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.696471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.696496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.696661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.696687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.696879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.696905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.697041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.697066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.697229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.697254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.697412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.697437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 
00:33:48.160 [2024-07-13 15:45:18.697600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.697625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.697785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.697810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.697964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.697989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.698152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.698179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.698367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.698392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.698552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.698577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.698741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.698767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.698900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.698930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.699091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.699116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.699299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.699323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 
00:33:48.160 [2024-07-13 15:45:18.699485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.160 [2024-07-13 15:45:18.699510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.160 qpair failed and we were unable to recover it. 00:33:48.160 [2024-07-13 15:45:18.699670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.699695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.699820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.699844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.700004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.700029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.700190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.700215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.700370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.700394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.700557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.700582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.700775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.700800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.700957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.700983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.701120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.701145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 
00:33:48.161 [2024-07-13 15:45:18.701304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.701328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.701498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.701523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.701685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.701711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.701877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.701902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.702061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.702086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.702268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.702293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.702450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.702475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.702657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.702681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.702841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.702881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.703043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.703069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 
00:33:48.161 [2024-07-13 15:45:18.703257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.703282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.703439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.703464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.703648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.703673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.703832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.703857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.704035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.704060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.704198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.704221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.704382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.704406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.704592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.704617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.704777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.704802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.704938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.704963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 
00:33:48.161 [2024-07-13 15:45:18.705093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.705118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.705306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.705332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.705516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.705540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.705683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.705707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.705848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.705882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.706059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.706084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.161 qpair failed and we were unable to recover it. 00:33:48.161 [2024-07-13 15:45:18.706216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.161 [2024-07-13 15:45:18.706240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.706410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.706442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.706602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.706628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.706786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.706811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 
00:33:48.162 [2024-07-13 15:45:18.707000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.707025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.707187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.707212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.707350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.707374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.707560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.707585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.707754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.707782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.707957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.707983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.708173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.708198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.708335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.708359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.708523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.708548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.708740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.708766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 
00:33:48.162 [2024-07-13 15:45:18.708938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.708964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.709127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.709151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.709318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.709343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.709507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.709533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.709718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.709743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.709902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.709927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.710091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.710116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.710257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.710283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.710471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.710496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.710659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.710684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 
00:33:48.162 [2024-07-13 15:45:18.710891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.710917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.711059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.711084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.711224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.711250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.711410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.711435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.711602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.711628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.711764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.711791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.711958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.711984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.712146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.712171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.712302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.712328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.712524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.712550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 
00:33:48.162 [2024-07-13 15:45:18.712740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.712765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.712950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.712975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.713116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.713141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.713308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.713333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.713475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.713502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.713693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.713719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.713856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.162 [2024-07-13 15:45:18.713890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.162 qpair failed and we were unable to recover it. 00:33:48.162 [2024-07-13 15:45:18.714075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.714105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.714288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.714313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.714471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.714495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 
00:33:48.163 [2024-07-13 15:45:18.714658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.714683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.714846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.714880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.715044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.715071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.715260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.715285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.715421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.715446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.715611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.715637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.715801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.715827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.716016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.716042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.716203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.716229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.716385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.716410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 
00:33:48.163 [2024-07-13 15:45:18.716570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.716596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.716774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.716799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.716958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.716984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.717150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.717176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.717341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.717367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.717501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.717527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.717659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.717684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.717821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.717846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.718038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.718063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.718225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.718251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 
00:33:48.163 [2024-07-13 15:45:18.718390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.718415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.718579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.718604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.718792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.718818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.718981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.719008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.719184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.719209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.719371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.719396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.719537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.719563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.719725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.719750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.719916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.719942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.720078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.720104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 
00:33:48.163 [2024-07-13 15:45:18.720292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.720317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.720440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.720465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.720630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.720655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.720818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.720844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.721036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.721062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.721205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.721230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.163 [2024-07-13 15:45:18.721418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.163 [2024-07-13 15:45:18.721443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.163 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.721609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.721638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.721802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.721827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.721995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.722022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 
00:33:48.164 [2024-07-13 15:45:18.722212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.722237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.722373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.722398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.722523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.722547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.722711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.722736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.722893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.722919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.723106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.723131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.723289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.723314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.723470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.723497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.723685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.723710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.723874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.723899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 
00:33:48.164 [2024-07-13 15:45:18.724039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.724063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.724236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.724261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.724423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.724448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.724594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.724617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.724803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.724830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.725001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.725028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.725182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.725207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.725398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.725423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.725584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.725609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.725773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.725798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 
00:33:48.164 [2024-07-13 15:45:18.725985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.726011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.726196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.726221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.726383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.726407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.726546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.726572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.726751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.726792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.726989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.727017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.727176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.727202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.727336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.727362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.727490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.727515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.727682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.727708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 
00:33:48.164 [2024-07-13 15:45:18.727872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.727900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.728041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.728067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.728225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.728251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.728414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.728442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.728629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.728654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.728787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.728813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.728971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.164 [2024-07-13 15:45:18.728997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.164 qpair failed and we were unable to recover it. 00:33:48.164 [2024-07-13 15:45:18.729183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.165 [2024-07-13 15:45:18.729214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.165 qpair failed and we were unable to recover it. 00:33:48.165 [2024-07-13 15:45:18.729374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.165 [2024-07-13 15:45:18.729400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.165 qpair failed and we were unable to recover it. 00:33:48.165 [2024-07-13 15:45:18.729537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.165 [2024-07-13 15:45:18.729564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.165 qpair failed and we were unable to recover it. 
00:33:48.165 [2024-07-13 15:45:18.729750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.165 [2024-07-13 15:45:18.729777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.165 qpair failed and we were unable to recover it. 00:33:48.165 [2024-07-13 15:45:18.729933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.165 [2024-07-13 15:45:18.729959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.165 qpair failed and we were unable to recover it. 00:33:48.165 [2024-07-13 15:45:18.730118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.165 [2024-07-13 15:45:18.730144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.165 qpair failed and we were unable to recover it. 00:33:48.165 [2024-07-13 15:45:18.730309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.165 [2024-07-13 15:45:18.730337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.165 qpair failed and we were unable to recover it. 00:33:48.165 [2024-07-13 15:45:18.730521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.165 [2024-07-13 15:45:18.730547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.165 qpair failed and we were unable to recover it. 00:33:48.165 [2024-07-13 15:45:18.730711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.165 [2024-07-13 15:45:18.730736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.165 qpair failed and we were unable to recover it. 00:33:48.165 [2024-07-13 15:45:18.730899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.165 [2024-07-13 15:45:18.730924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.165 qpair failed and we were unable to recover it. 00:33:48.165 [2024-07-13 15:45:18.731092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.165 [2024-07-13 15:45:18.731119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.165 qpair failed and we were unable to recover it. 00:33:48.165 [2024-07-13 15:45:18.731271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.165 [2024-07-13 15:45:18.731298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.165 qpair failed and we were unable to recover it. 00:33:48.165 [2024-07-13 15:45:18.731495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.165 [2024-07-13 15:45:18.731521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.165 qpair failed and we were unable to recover it. 
00:33:48.165 [2024-07-13 15:45:18.731682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.165 [2024-07-13 15:45:18.731707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.165 qpair failed and we were unable to recover it. 00:33:48.165 [2024-07-13 15:45:18.731879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.165 [2024-07-13 15:45:18.731906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.165 qpair failed and we were unable to recover it. 00:33:48.165 [2024-07-13 15:45:18.732068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.165 [2024-07-13 15:45:18.732095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.165 qpair failed and we were unable to recover it. 00:33:48.165 [2024-07-13 15:45:18.732282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.165 [2024-07-13 15:45:18.732308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.165 qpair failed and we were unable to recover it. 00:33:48.165 [2024-07-13 15:45:18.732498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.165 [2024-07-13 15:45:18.732525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.165 qpair failed and we were unable to recover it. 00:33:48.165 [2024-07-13 15:45:18.732688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.165 [2024-07-13 15:45:18.732714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.165 qpair failed and we were unable to recover it. 00:33:48.165 [2024-07-13 15:45:18.732874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.165 [2024-07-13 15:45:18.732900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.165 qpair failed and we were unable to recover it. 00:33:48.165 [2024-07-13 15:45:18.733087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.165 [2024-07-13 15:45:18.733113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.165 qpair failed and we were unable to recover it. 00:33:48.165 [2024-07-13 15:45:18.733275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.165 [2024-07-13 15:45:18.733301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.165 qpair failed and we were unable to recover it. 00:33:48.165 [2024-07-13 15:45:18.733472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.165 [2024-07-13 15:45:18.733498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.165 qpair failed and we were unable to recover it. 
00:33:48.165 [2024-07-13 15:45:18.733668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.165 [2024-07-13 15:45:18.733696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.165 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every retry timestamped from 2024-07-13 15:45:18.733860 through 15:45:18.772971 ...]
00:33:48.170 [2024-07-13 15:45:18.773134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.171 [2024-07-13 15:45:18.773158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.171 qpair failed and we were unable to recover it.
00:33:48.171 [2024-07-13 15:45:18.773289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.773313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.773476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.773501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.773661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.773686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.773845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.773886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.774027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.774052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.774239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.774264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.774428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.774453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.774614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.774639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.774810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.774835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.774976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.775002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 
00:33:48.171 [2024-07-13 15:45:18.775157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.775182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.775375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.775400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.775563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.775590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.775744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.775769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.775945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.775971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.776153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.776178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.776337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.776362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.776489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.776516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.776704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.776729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.776883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.776909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 
00:33:48.171 [2024-07-13 15:45:18.777097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.777122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.777288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.777313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.777478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.777503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.777646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.777671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.777827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.777852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.778051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.778077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.778240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.778267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.778429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.778454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.778615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.778640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.778819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.778847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 
00:33:48.171 [2024-07-13 15:45:18.779061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.779086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.779274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.779299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.779435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.779460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.779589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.779615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.779746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.779775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.779938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.779964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.780119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.780144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.780337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.780362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.780534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.780559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 00:33:48.171 [2024-07-13 15:45:18.780747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.171 [2024-07-13 15:45:18.780772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.171 qpair failed and we were unable to recover it. 
00:33:48.172 [2024-07-13 15:45:18.780957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.780982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.781145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.781170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.781327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.781352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.781511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.781536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.781723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.781749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.781914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.781940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.782103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.782129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.782292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.782317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.782483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.782508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.782663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.782688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 
00:33:48.172 [2024-07-13 15:45:18.782843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.782876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.783035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.783060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.783225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.783250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.783407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.783432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.783587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.783612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.783797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.783825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.784048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.784073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.784259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.784283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.784447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.784472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.784638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.784663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 
00:33:48.172 [2024-07-13 15:45:18.784822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.784848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.785053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.785079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.785217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.785243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.785405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.785430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.785557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.785582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.785744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.785769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.785901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.785927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.786065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.786090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.786253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.786279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.786442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.786468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 
00:33:48.172 [2024-07-13 15:45:18.786628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.786653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.786802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.786826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.786992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.787018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.787183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.787207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.787374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.787405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.787569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.787594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.787732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.787757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.787941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.787967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.788126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.788153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 00:33:48.172 [2024-07-13 15:45:18.788351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.172 [2024-07-13 15:45:18.788376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.172 qpair failed and we were unable to recover it. 
00:33:48.172 [2024-07-13 15:45:18.788565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.788590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.788743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.788768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.788909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.788936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.789076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.789103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.789293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.789318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.789505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.789529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.789664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.789689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.789891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.789917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.790089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.790114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.790270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.790295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 
00:33:48.173 [2024-07-13 15:45:18.790463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.790489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.790650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.790675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.790834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.790859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.791004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.791029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.791166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.791192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.791379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.791404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.791565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.791591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.791752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.791777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.791940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.791966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.792161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.792186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 
00:33:48.173 [2024-07-13 15:45:18.792347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.792372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.792539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.792564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.792699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.792724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.792879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.792904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.793063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.793088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.793242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.793267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.793427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.793452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.793637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.793662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.793792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.793818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.794001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.794027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 
00:33:48.173 [2024-07-13 15:45:18.794183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.794209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.794341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.794366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.794553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.794578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.794716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.794741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.794905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.794935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.795089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.795114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.795301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.795326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.795463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.795487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.795666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.795691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.173 [2024-07-13 15:45:18.795878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.795922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 
00:33:48.173 [2024-07-13 15:45:18.796084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.173 [2024-07-13 15:45:18.796109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.173 qpair failed and we were unable to recover it. 00:33:48.174 [2024-07-13 15:45:18.796240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.174 [2024-07-13 15:45:18.796265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.174 qpair failed and we were unable to recover it. 00:33:48.174 [2024-07-13 15:45:18.796447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.174 [2024-07-13 15:45:18.796472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.174 qpair failed and we were unable to recover it. 00:33:48.174 [2024-07-13 15:45:18.796599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.174 [2024-07-13 15:45:18.796623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.174 qpair failed and we were unable to recover it. 00:33:48.174 [2024-07-13 15:45:18.796805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.174 [2024-07-13 15:45:18.796830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.174 qpair failed and we were unable to recover it. 00:33:48.174 [2024-07-13 15:45:18.796991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.174 [2024-07-13 15:45:18.797017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.174 qpair failed and we were unable to recover it. 00:33:48.174 [2024-07-13 15:45:18.797207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.174 [2024-07-13 15:45:18.797232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.174 qpair failed and we were unable to recover it. 00:33:48.174 [2024-07-13 15:45:18.797357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.174 [2024-07-13 15:45:18.797382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.174 qpair failed and we were unable to recover it. 00:33:48.174 [2024-07-13 15:45:18.797519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.174 [2024-07-13 15:45:18.797545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.174 qpair failed and we were unable to recover it. 00:33:48.174 [2024-07-13 15:45:18.797704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.174 [2024-07-13 15:45:18.797729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.174 qpair failed and we were unable to recover it. 
00:33:48.174 [2024-07-13 15:45:18.797859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.174 [2024-07-13 15:45:18.797900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.174 qpair failed and we were unable to recover it.
00:33:48.174 [2024-07-13 15:45:18.798065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.174 [2024-07-13 15:45:18.798090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.174 qpair failed and we were unable to recover it.
[The same three-line error sequence repeats for every subsequent reconnect attempt from 15:45:18.798 through 15:45:18.837: each connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420, and the qpair cannot be recovered.]
00:33:48.179 [2024-07-13 15:45:18.837518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.179 [2024-07-13 15:45:18.837543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.179 qpair failed and we were unable to recover it. 00:33:48.179 [2024-07-13 15:45:18.837701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.179 [2024-07-13 15:45:18.837726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.179 qpair failed and we were unable to recover it. 00:33:48.179 [2024-07-13 15:45:18.837892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.179 [2024-07-13 15:45:18.837918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.179 qpair failed and we were unable to recover it. 00:33:48.179 [2024-07-13 15:45:18.838083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.179 [2024-07-13 15:45:18.838108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.179 qpair failed and we were unable to recover it. 00:33:48.179 [2024-07-13 15:45:18.838265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.179 [2024-07-13 15:45:18.838291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.179 qpair failed and we were unable to recover it. 00:33:48.179 [2024-07-13 15:45:18.838475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.179 [2024-07-13 15:45:18.838500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.179 qpair failed and we were unable to recover it. 00:33:48.179 [2024-07-13 15:45:18.838661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.179 [2024-07-13 15:45:18.838686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.179 qpair failed and we were unable to recover it. 00:33:48.179 [2024-07-13 15:45:18.838817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.179 [2024-07-13 15:45:18.838842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.179 qpair failed and we were unable to recover it. 00:33:48.179 [2024-07-13 15:45:18.839015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.179 [2024-07-13 15:45:18.839041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.179 qpair failed and we were unable to recover it. 00:33:48.179 [2024-07-13 15:45:18.839227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.179 [2024-07-13 15:45:18.839252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.179 qpair failed and we were unable to recover it. 
00:33:48.179 [2024-07-13 15:45:18.839381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.179 [2024-07-13 15:45:18.839406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.179 qpair failed and we were unable to recover it. 00:33:48.179 [2024-07-13 15:45:18.839563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.179 [2024-07-13 15:45:18.839588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.179 qpair failed and we were unable to recover it. 00:33:48.179 [2024-07-13 15:45:18.839780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.839805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.839950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.839976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.840142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.840168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.840324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.840350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.840514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.840539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.840675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.840701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.840888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.840914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.841080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.841106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 
00:33:48.180 [2024-07-13 15:45:18.841294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.841319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.841481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.841506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.841662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.841687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.841877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.841903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.842029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.842054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.842180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.842205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.842397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.842422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.842607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.842632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.842822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.842848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.843020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.843045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 
00:33:48.180 [2024-07-13 15:45:18.843204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.843229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.843362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.843387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.843542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.843567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.843708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.843733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.843898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.843925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.844064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.844089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.844251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.844277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.844467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.844493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.844653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.844678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.844814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.844840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 
00:33:48.180 [2024-07-13 15:45:18.845037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.845063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.845198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.845227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.845413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.845438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.845590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.845614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.845750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.845777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.845944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.845971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.846160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.846186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.846343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.846368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.846529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.846554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.846711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.846736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 
00:33:48.180 [2024-07-13 15:45:18.846893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.846919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.847055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.847080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.180 [2024-07-13 15:45:18.847212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.180 [2024-07-13 15:45:18.847236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.180 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.847375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.847400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.847558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.847585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.847782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.847808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.847936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.847961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.848097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.848123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.848271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.848296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.848468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.848493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 
00:33:48.181 [2024-07-13 15:45:18.848680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.848705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.848854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.848887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.849028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.849053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.849213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.849238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.849401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.849426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.849587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.849614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.849776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.849801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.849937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.849963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.850104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.850129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.850289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.850315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 
00:33:48.181 [2024-07-13 15:45:18.850497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.850522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.850704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.850729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.850917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.850943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.851104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.851129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.851285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.851310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.851469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.851494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.851660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.851685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.851840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.851871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.852059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.852084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.852218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.852243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 
00:33:48.181 [2024-07-13 15:45:18.852375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.852402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.852531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.852561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.852766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.852791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.852929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.852955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.853115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.853140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.853302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.853328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.853489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.853515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.853680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.853705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.853890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.853917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.854060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.854087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 
00:33:48.181 [2024-07-13 15:45:18.854277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.854303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.854496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.854521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.854685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.181 [2024-07-13 15:45:18.854711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.181 qpair failed and we were unable to recover it. 00:33:48.181 [2024-07-13 15:45:18.854894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.854920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.855080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.855106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.855269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.855295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.855481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.855507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.855664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.855689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.855819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.855846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.856017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.856043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 
00:33:48.182 [2024-07-13 15:45:18.856174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.856199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.856392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.856417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.856554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.856579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.856738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.856767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.856951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.856977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.857141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.857165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.857307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.857332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.857457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.857481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.857642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.857667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.857862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.857893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 
00:33:48.182 [2024-07-13 15:45:18.858058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.858085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.858248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.858274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.858464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.858489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.858645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.858670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.858805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.858831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.859042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.859067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.859233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.859258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.859421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.859446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.859586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.859612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.859743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.859770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 
00:33:48.182 [2024-07-13 15:45:18.859936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.859962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.860121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.860150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.860281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.860306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.860477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.860502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.860667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.860692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.860854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.860887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.861024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.861050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.861214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.861240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.861397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.861422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.861581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.861606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 
00:33:48.182 [2024-07-13 15:45:18.861768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.861793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.861953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.861979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.862140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.862165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.862352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.862376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.862514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.862540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.862710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.182 [2024-07-13 15:45:18.862736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.182 qpair failed and we were unable to recover it. 00:33:48.182 [2024-07-13 15:45:18.862878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.862903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.863035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.863062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.863221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.863247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.863432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.863458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 
00:33:48.183 [2024-07-13 15:45:18.863621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.863646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.863808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.863833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.864002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.864028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.864214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.864239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.864399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.864424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.864563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.864588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.864770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.864795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.864955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.864980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.865171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.865197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.865360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.865385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 
00:33:48.183 [2024-07-13 15:45:18.865539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.865565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.865750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.865775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.865896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.865922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.866091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.866116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.866243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.866269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.866459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.866484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.866638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.866663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.866847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.866882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.867056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.867081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.867241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.867266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 
00:33:48.183 [2024-07-13 15:45:18.867429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.867454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.867598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.867627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.867818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.867843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.867983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.868009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.868167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.183 [2024-07-13 15:45:18.868194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.183 qpair failed and we were unable to recover it. 00:33:48.183 [2024-07-13 15:45:18.868381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.868406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.868541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.868566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.868760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.868785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.868922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.868948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.869112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.869137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 
00:33:48.184 [2024-07-13 15:45:18.869301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.869327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.869480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.869505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.869663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.869688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.869890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.869916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.870102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.870128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.870317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.870342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.870523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.870549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.870708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.870734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.870895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.870921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.871108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.871134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 
00:33:48.184 [2024-07-13 15:45:18.871264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.871289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.871418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.871443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.871608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.871633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.871770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.871796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.871986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.872013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.872165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.872190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.872357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.872382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.872566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.872591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.872751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.872777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.872969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.872995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 
00:33:48.184 [2024-07-13 15:45:18.873161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.873186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.873311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.873336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.873523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.873548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.873707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.873733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.873863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.873907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.874093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.874117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.874301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.874326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.184 qpair failed and we were unable to recover it. 00:33:48.184 [2024-07-13 15:45:18.874486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.184 [2024-07-13 15:45:18.874511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.874701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.874726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.874915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.874941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 
00:33:48.185 [2024-07-13 15:45:18.875105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.875129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.875268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.875297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.875485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.875511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.875695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.875720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.875882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.875917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.876081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.876106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.876269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.876294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.876457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.876482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.876641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.876666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.876823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.876848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 
00:33:48.185 [2024-07-13 15:45:18.877019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.877044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.877230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.877255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.877442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.877467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.877632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.877657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.877817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.877842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.877983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.878009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.878166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.878191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.878376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.878401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.878559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.878584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.878719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.878745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 
00:33:48.185 [2024-07-13 15:45:18.878919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.878945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.879102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.879127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.879297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.879322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.879449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.879474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.879635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.879660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.879796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.879821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.879946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.879972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.185 [2024-07-13 15:45:18.880125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.185 [2024-07-13 15:45:18.880151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.185 qpair failed and we were unable to recover it. 00:33:48.468 [2024-07-13 15:45:18.880282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.468 [2024-07-13 15:45:18.880308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.468 qpair failed and we were unable to recover it. 00:33:48.468 [2024-07-13 15:45:18.880468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.468 [2024-07-13 15:45:18.880494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.468 qpair failed and we were unable to recover it. 
00:33:48.468 [2024-07-13 15:45:18.880654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.468 [2024-07-13 15:45:18.880680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.468 qpair failed and we were unable to recover it. 00:33:48.468 [2024-07-13 15:45:18.880835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.468 [2024-07-13 15:45:18.880860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.468 qpair failed and we were unable to recover it. 00:33:48.468 [2024-07-13 15:45:18.881029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.468 [2024-07-13 15:45:18.881055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.468 qpair failed and we were unable to recover it. 00:33:48.468 [2024-07-13 15:45:18.881209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.468 [2024-07-13 15:45:18.881234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.468 qpair failed and we were unable to recover it. 00:33:48.468 [2024-07-13 15:45:18.881419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.468 [2024-07-13 15:45:18.881445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.468 qpair failed and we were unable to recover it. 00:33:48.468 [2024-07-13 15:45:18.881626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.468 [2024-07-13 15:45:18.881652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.468 qpair failed and we were unable to recover it. 00:33:48.468 [2024-07-13 15:45:18.881821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.468 [2024-07-13 15:45:18.881846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.468 qpair failed and we were unable to recover it. 00:33:48.468 [2024-07-13 15:45:18.882027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.468 [2024-07-13 15:45:18.882052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.468 qpair failed and we were unable to recover it. 00:33:48.468 [2024-07-13 15:45:18.882237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.468 [2024-07-13 15:45:18.882262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.468 qpair failed and we were unable to recover it. 00:33:48.468 [2024-07-13 15:45:18.882389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.468 [2024-07-13 15:45:18.882414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.468 qpair failed and we were unable to recover it. 
00:33:48.468 [2024-07-13 15:45:18.882571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.468 [2024-07-13 15:45:18.882595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.468 qpair failed and we were unable to recover it. 00:33:48.468 [2024-07-13 15:45:18.882728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.468 [2024-07-13 15:45:18.882758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.468 qpair failed and we were unable to recover it. 00:33:48.468 [2024-07-13 15:45:18.882913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.468 [2024-07-13 15:45:18.882940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.468 qpair failed and we were unable to recover it. 00:33:48.468 [2024-07-13 15:45:18.883104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.468 [2024-07-13 15:45:18.883129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.468 qpair failed and we were unable to recover it. 00:33:48.468 [2024-07-13 15:45:18.883284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.468 [2024-07-13 15:45:18.883309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.468 qpair failed and we were unable to recover it. 00:33:48.468 [2024-07-13 15:45:18.883496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.468 [2024-07-13 15:45:18.883521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.468 qpair failed and we were unable to recover it. 00:33:48.468 [2024-07-13 15:45:18.883675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.468 [2024-07-13 15:45:18.883700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.468 qpair failed and we were unable to recover it. 00:33:48.468 [2024-07-13 15:45:18.883894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.468 [2024-07-13 15:45:18.883920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.468 qpair failed and we were unable to recover it. 00:33:48.468 [2024-07-13 15:45:18.884059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.468 [2024-07-13 15:45:18.884084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.468 qpair failed and we were unable to recover it. 00:33:48.468 [2024-07-13 15:45:18.884223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.468 [2024-07-13 15:45:18.884248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.468 qpair failed and we were unable to recover it. 
00:33:48.468 [2024-07-13 15:45:18.884432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.468 [2024-07-13 15:45:18.884457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.468 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.884611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.884636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.884793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.884818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.884974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.884999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.885192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.885217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.885361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.885387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.885544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.885570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.885741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.885766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.885898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.885924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.886105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.886131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 
00:33:48.469 [2024-07-13 15:45:18.886284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.886310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.886484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.886509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.886666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.886693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.886850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.886883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.887022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.887047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.887211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.887235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.887370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.887395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.887558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.887583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.887751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.887776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.887969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.887995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 
00:33:48.469 [2024-07-13 15:45:18.888151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.888176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.888310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.888335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.888495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.888521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.888703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.888728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.888914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.888940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.889104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.889130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.889325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.889350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.889512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.889537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.889696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.889722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.889910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.889936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 
00:33:48.469 [2024-07-13 15:45:18.890123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.890149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.890307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.890337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.890523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.469 [2024-07-13 15:45:18.890548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.469 qpair failed and we were unable to recover it. 00:33:48.469 [2024-07-13 15:45:18.890735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.470 [2024-07-13 15:45:18.890759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.470 qpair failed and we were unable to recover it. 00:33:48.470 [2024-07-13 15:45:18.890923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.470 [2024-07-13 15:45:18.890949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.470 qpair failed and we were unable to recover it. 00:33:48.470 [2024-07-13 15:45:18.891110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.470 [2024-07-13 15:45:18.891135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.470 qpair failed and we were unable to recover it. 00:33:48.470 [2024-07-13 15:45:18.891288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.470 [2024-07-13 15:45:18.891313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.470 qpair failed and we were unable to recover it. 00:33:48.470 [2024-07-13 15:45:18.891475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.470 [2024-07-13 15:45:18.891501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.470 qpair failed and we were unable to recover it. 00:33:48.470 [2024-07-13 15:45:18.891663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.470 [2024-07-13 15:45:18.891688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.470 qpair failed and we were unable to recover it. 00:33:48.470 [2024-07-13 15:45:18.891879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.470 [2024-07-13 15:45:18.891924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.470 qpair failed and we were unable to recover it. 
00:33:48.470 [2024-07-13 15:45:18.892111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.470 [2024-07-13 15:45:18.892136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.470 qpair failed and we were unable to recover it. 00:33:48.470 [2024-07-13 15:45:18.892323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.470 [2024-07-13 15:45:18.892348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.470 qpair failed and we were unable to recover it. 00:33:48.470 [2024-07-13 15:45:18.892502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.470 [2024-07-13 15:45:18.892528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.470 qpair failed and we were unable to recover it. 00:33:48.470 [2024-07-13 15:45:18.892680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.470 [2024-07-13 15:45:18.892706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.470 qpair failed and we were unable to recover it. 00:33:48.470 [2024-07-13 15:45:18.892873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.470 [2024-07-13 15:45:18.892899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.470 qpair failed and we were unable to recover it. 00:33:48.470 [2024-07-13 15:45:18.893035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.470 [2024-07-13 15:45:18.893060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.470 qpair failed and we were unable to recover it. 00:33:48.470 [2024-07-13 15:45:18.893243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.470 [2024-07-13 15:45:18.893268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.470 qpair failed and we were unable to recover it. 00:33:48.470 [2024-07-13 15:45:18.893401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.470 [2024-07-13 15:45:18.893426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.470 qpair failed and we were unable to recover it. 00:33:48.470 [2024-07-13 15:45:18.893594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.470 [2024-07-13 15:45:18.893620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.470 qpair failed and we were unable to recover it. 00:33:48.470 [2024-07-13 15:45:18.893758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.470 [2024-07-13 15:45:18.893784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.470 qpair failed and we were unable to recover it. 
00:33:48.470 [2024-07-13 15:45:18.893946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.470 [2024-07-13 15:45:18.893972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.470 qpair failed and we were unable to recover it.
00:33:48.477 [2024-07-13 15:45:18.894137 - 2024-07-13 15:45:18.932974] posix.c:1038:posix_sock_create / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: the same sequence repeats for every subsequent connection attempt in this window: connect() failed, errno = 111; sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:33:48.477 [2024-07-13 15:45:18.933101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.933127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.933317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.933342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.933474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.933499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.933626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.933651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.933807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.933832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.933984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.934010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.934170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.934195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.934367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.934392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.934558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.934583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.934739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.934768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 
00:33:48.477 [2024-07-13 15:45:18.934974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.935000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.935161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.935186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.935375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.935400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.935589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.935614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.935751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.935777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.935909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.935935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.936073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.936099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.936281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.936306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.936463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.936489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.936649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.936674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 
00:33:48.477 [2024-07-13 15:45:18.936857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.936888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.937043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.937068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.937254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.937279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.937416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.937442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.937631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.937656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.937813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.937839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.938009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.938034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.938189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.938214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.477 qpair failed and we were unable to recover it. 00:33:48.477 [2024-07-13 15:45:18.938398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.477 [2024-07-13 15:45:18.938423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.938564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.938589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 
00:33:48.478 [2024-07-13 15:45:18.938750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.938775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.938932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.938958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.939091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.939116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.939275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.939299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.939468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.939493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.939630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.939655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.939815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.939840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.940020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.940046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.940213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.940243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.940378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.940403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 
00:33:48.478 [2024-07-13 15:45:18.940590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.940615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.940769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.940794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.940983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.941009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.941152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.941177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.941348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.941373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.941536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.941562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.941795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.941823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.942001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.942027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.942212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.942237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.942418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.942443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 
00:33:48.478 [2024-07-13 15:45:18.942568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.942593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.942771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.942796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.942935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.942961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.943147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.943172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.943358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.943383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.478 qpair failed and we were unable to recover it. 00:33:48.478 [2024-07-13 15:45:18.943523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.478 [2024-07-13 15:45:18.943548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.943733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.943759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.943919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.943946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.944104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.944130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.944267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.944293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 
00:33:48.479 [2024-07-13 15:45:18.944483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.944508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.944662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.944687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.944839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.944864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.945061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.945086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.945249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.945274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.945417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.945443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.945606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.945631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.945796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.945823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.945989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.946016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.946155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.946182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 
00:33:48.479 [2024-07-13 15:45:18.946372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.946397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.946536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.946561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.946721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.946746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.946969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.946994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.947119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.947144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.947305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.947330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.947493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.947517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.947653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.947678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.947839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.947873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.948061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.948086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 
00:33:48.479 [2024-07-13 15:45:18.948246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.948272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.948435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.948460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.948622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.948648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.948785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.948811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.948947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.948974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.949135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.949160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.949297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.949324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.949511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.949536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.949698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.949723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.479 qpair failed and we were unable to recover it. 00:33:48.479 [2024-07-13 15:45:18.949859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.479 [2024-07-13 15:45:18.949902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 
00:33:48.480 [2024-07-13 15:45:18.950064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.950090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.950277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.950302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.950445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.950470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.950631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.950656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.950818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.950843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.951015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.951042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.951204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.951230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.951361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.951386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.951524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.951550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.951710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.951735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 
00:33:48.480 [2024-07-13 15:45:18.951932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.951958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.952143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.952168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.952355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.952381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.952520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.952545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.952705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.952730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.952896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.952922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.953075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.953100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.953259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.953284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.953442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.953468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.953654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.953679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 
00:33:48.480 [2024-07-13 15:45:18.953862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.953894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.954056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.954082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.954248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.954273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.954432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.954457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.954587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.954612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.954739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.954764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.954916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.954942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.955103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.955128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.955287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.955315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.955476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.955501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 
00:33:48.480 [2024-07-13 15:45:18.955687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.955711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.955875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.955901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.956038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.956063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.956200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.956224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.956357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.956381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.956566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.956591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.480 [2024-07-13 15:45:18.956777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.480 [2024-07-13 15:45:18.956801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.480 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.956964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.956991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.957151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.957176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.957338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.957363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 
00:33:48.481 [2024-07-13 15:45:18.957527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.957552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.957707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.957732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.957891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.957917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.958101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.958126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.958248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.958273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.958404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.958429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.958594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.958621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.958761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.958786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.958920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.958945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.959110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.959135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 
00:33:48.481 [2024-07-13 15:45:18.959325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.959350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.959488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.959513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.959673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.959698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.959855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.959886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.960016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.960041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.960209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.960234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.960370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.960395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.960531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.960556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.960716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.960741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.960883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.960909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 
00:33:48.481 [2024-07-13 15:45:18.961033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.961059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.961201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.961226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.961399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.961424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.961608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.961633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.961761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.961787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.961925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.961951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.962083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.962108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.962264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.962290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.962448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.962477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.962640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.962667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 
00:33:48.481 [2024-07-13 15:45:18.962856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.962892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.963084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.963109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.963292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.481 [2024-07-13 15:45:18.963317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.481 qpair failed and we were unable to recover it. 00:33:48.481 [2024-07-13 15:45:18.963503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.963527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.963682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.963707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.963846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.963879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.964068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.964093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.964255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.964280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.964439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.964465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.964651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.964676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 
00:33:48.482 [2024-07-13 15:45:18.964841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.964871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.965011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.965037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.965196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.965221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.965355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.965380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.965565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.965590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.965750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.965775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.965908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.965934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.966088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.966114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.966303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.966328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.966492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.966517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 
00:33:48.482 [2024-07-13 15:45:18.966682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.966708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.966874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.966900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.967037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.967062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.967221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.967246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.967409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.967434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.967631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.967657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.967815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.967840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.968034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.968059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.968222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.968247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.968411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.968436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 
00:33:48.482 [2024-07-13 15:45:18.968593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.968618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.968805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.968831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.968995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.969020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.969184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.969209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.969339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.969364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.969518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.969543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.969707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.969733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.969884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.969910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.970079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.482 [2024-07-13 15:45:18.970109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.482 qpair failed and we were unable to recover it. 00:33:48.482 [2024-07-13 15:45:18.970265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.970290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 
00:33:48.483 [2024-07-13 15:45:18.970449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.970474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.970666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.970691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.970823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.970848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.971011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.971037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.971191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.971217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.971371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.971396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.971581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.971606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.971744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.971769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.971937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.971963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.972128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.972155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 
00:33:48.483 [2024-07-13 15:45:18.972342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.972367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.972522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.972547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.972685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.972710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.972901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.972927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.973116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.973141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.973303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.973328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.973461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.973486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.973648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.973673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.973814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.973839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.974041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.974067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 
00:33:48.483 [2024-07-13 15:45:18.974203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.974228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.974391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.974417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.974607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.974632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.974819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.974846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.975072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.975098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.975262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.975288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.975475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.483 [2024-07-13 15:45:18.975500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.483 qpair failed and we were unable to recover it. 00:33:48.483 [2024-07-13 15:45:18.975638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.975664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.975824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.975850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.976012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.976037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 
00:33:48.484 [2024-07-13 15:45:18.976206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.976232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.976387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.976412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.976567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.976592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.976786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.976811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.976942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.976967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.977132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.977157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.977315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.977342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.977504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.977530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.977718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.977747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.977905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.977931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 
00:33:48.484 [2024-07-13 15:45:18.978093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.978117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.978260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.978286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.978448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.978473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.978638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.978663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.978823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.978848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.979021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.979046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.979207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.979232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.979372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.979397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.979528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.979553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.979714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.979739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 
00:33:48.484 [2024-07-13 15:45:18.979960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.979994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.980161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.980185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.980323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.980348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.980533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.980558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.980696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.980721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.980905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.980931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.981063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.981087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.981265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.981290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.981456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.981481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.484 [2024-07-13 15:45:18.981668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.981693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 
00:33:48.484 [2024-07-13 15:45:18.981854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.484 [2024-07-13 15:45:18.981893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.484 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.982028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.982053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.982211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.982238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.982403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.982428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.982592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.982617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.982758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.982787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.982984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.983010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.983137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.983162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.983326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.983351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.983486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.983511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 
00:33:48.485 [2024-07-13 15:45:18.983696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.983721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.983883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.983910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.984075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.984100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.984262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.984287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.984451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.984476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.984647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.984672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.984809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.984834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.985026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.985052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.985189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.985218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.985407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.985432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 
00:33:48.485 [2024-07-13 15:45:18.985617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.985643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.985802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.985828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.985993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.986018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.986153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.986178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.986331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.986374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.986580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.986608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.986782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.986811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.987015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.987042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.987226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.987254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.987418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.987443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 
00:33:48.485 [2024-07-13 15:45:18.987601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.987626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.987823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.987848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.988028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.988054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.988246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.988271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.988437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.988465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.988671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.485 [2024-07-13 15:45:18.988696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.485 qpair failed and we were unable to recover it. 00:33:48.485 [2024-07-13 15:45:18.988864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.486 [2024-07-13 15:45:18.988908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.486 qpair failed and we were unable to recover it. 00:33:48.486 [2024-07-13 15:45:18.989069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.486 [2024-07-13 15:45:18.989094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.486 qpair failed and we were unable to recover it. 00:33:48.486 [2024-07-13 15:45:18.989283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.486 [2024-07-13 15:45:18.989308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.486 qpair failed and we were unable to recover it. 00:33:48.486 [2024-07-13 15:45:18.989494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.486 [2024-07-13 15:45:18.989520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.486 qpair failed and we were unable to recover it. 
00:33:48.486 [2024-07-13 15:45:18.989653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.486 [2024-07-13 15:45:18.989695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.486 qpair failed and we were unable to recover it. 00:33:48.486 [2024-07-13 15:45:18.989883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.486 [2024-07-13 15:45:18.989912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.486 qpair failed and we were unable to recover it. 00:33:48.486 [2024-07-13 15:45:18.990091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.486 [2024-07-13 15:45:18.990119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.486 qpair failed and we were unable to recover it. 00:33:48.486 [2024-07-13 15:45:18.990326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.486 [2024-07-13 15:45:18.990351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.486 qpair failed and we were unable to recover it. 00:33:48.486 [2024-07-13 15:45:18.990504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.486 [2024-07-13 15:45:18.990529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.486 qpair failed and we were unable to recover it. 00:33:48.486 [2024-07-13 15:45:18.990723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.486 [2024-07-13 15:45:18.990749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.486 qpair failed and we were unable to recover it. 00:33:48.486 [2024-07-13 15:45:18.990903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.486 [2024-07-13 15:45:18.990929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.486 qpair failed and we were unable to recover it. 00:33:48.486 [2024-07-13 15:45:18.991097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.486 [2024-07-13 15:45:18.991123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.486 qpair failed and we were unable to recover it. 00:33:48.486 [2024-07-13 15:45:18.991330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.486 [2024-07-13 15:45:18.991359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.486 qpair failed and we were unable to recover it. 00:33:48.486 [2024-07-13 15:45:18.991535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.486 [2024-07-13 15:45:18.991564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.486 qpair failed and we were unable to recover it. 
00:33:48.486 - 00:33:48.489 [2024-07-13 15:45:18.991 - 15:45:19.013] the same three-message sequence repeats for every further reconnection attempt in this interval: posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:33:48.489 [2024-07-13 15:45:19.013340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.489 [2024-07-13 15:45:19.013365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.489 qpair failed and we were unable to recover it. 00:33:48.489 [2024-07-13 15:45:19.013511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.489 [2024-07-13 15:45:19.013536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.489 qpair failed and we were unable to recover it. 00:33:48.489 [2024-07-13 15:45:19.013662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.489 [2024-07-13 15:45:19.013687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.489 qpair failed and we were unable to recover it. 00:33:48.489 [2024-07-13 15:45:19.013842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.489 [2024-07-13 15:45:19.013872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.489 qpair failed and we were unable to recover it. 00:33:48.489 [2024-07-13 15:45:19.014052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.490 [2024-07-13 15:45:19.014080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.490 qpair failed and we were unable to recover it. 00:33:48.490 [2024-07-13 15:45:19.014128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x600480 (9): Bad file descriptor 00:33:48.490 [2024-07-13 15:45:19.014367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.490 [2024-07-13 15:45:19.014405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.490 qpair failed and we were unable to recover it. 00:33:48.490 [2024-07-13 15:45:19.014596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.490 [2024-07-13 15:45:19.014634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.490 qpair failed and we were unable to recover it. 00:33:48.490 [2024-07-13 15:45:19.014828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.490 [2024-07-13 15:45:19.014860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.490 qpair failed and we were unable to recover it. 00:33:48.490 [2024-07-13 15:45:19.015056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.490 [2024-07-13 15:45:19.015086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.490 qpair failed and we were unable to recover it. 
00:33:48.490 [2024-07-13 15:45:19.015269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.490 [2024-07-13 15:45:19.015297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.490 qpair failed and we were unable to recover it. 00:33:48.490 [2024-07-13 15:45:19.015473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.490 [2024-07-13 15:45:19.015501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.490 qpair failed and we were unable to recover it. 00:33:48.490 [2024-07-13 15:45:19.015706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.490 [2024-07-13 15:45:19.015739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.490 qpair failed and we were unable to recover it. 00:33:48.490 [2024-07-13 15:45:19.015952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.490 [2024-07-13 15:45:19.015981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.490 qpair failed and we were unable to recover it. 00:33:48.490 [2024-07-13 15:45:19.016168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.490 [2024-07-13 15:45:19.016214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.490 qpair failed and we were unable to recover it. 00:33:48.490 [2024-07-13 15:45:19.016418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.490 [2024-07-13 15:45:19.016450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.490 qpair failed and we were unable to recover it. 00:33:48.490 [2024-07-13 15:45:19.016630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.490 [2024-07-13 15:45:19.016659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.490 qpair failed and we were unable to recover it. 00:33:48.490 [2024-07-13 15:45:19.016874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.490 [2024-07-13 15:45:19.016905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.490 qpair failed and we were unable to recover it. 00:33:48.490 [2024-07-13 15:45:19.017095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.490 [2024-07-13 15:45:19.017135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.490 qpair failed and we were unable to recover it. 00:33:48.490 [2024-07-13 15:45:19.017309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.490 [2024-07-13 15:45:19.017341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.490 qpair failed and we were unable to recover it. 
00:33:48.490 - 00:33:48.492 [2024-07-13 15:45:19.017 - 15:45:19.031] the repeated failure pattern continues against tqpair=0x7f7020000b90: posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:33:48.492 [2024-07-13 15:45:19.031709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.492 [2024-07-13 15:45:19.031737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.492 qpair failed and we were unable to recover it. 00:33:48.492 [2024-07-13 15:45:19.031920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.492 [2024-07-13 15:45:19.031946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.492 qpair failed and we were unable to recover it. 00:33:48.492 [2024-07-13 15:45:19.032107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.492 [2024-07-13 15:45:19.032134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.492 qpair failed and we were unable to recover it. 00:33:48.492 [2024-07-13 15:45:19.032297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.492 [2024-07-13 15:45:19.032323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.492 qpair failed and we were unable to recover it. 00:33:48.492 [2024-07-13 15:45:19.032484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.492 [2024-07-13 15:45:19.032509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.492 qpair failed and we were unable to recover it. 00:33:48.492 [2024-07-13 15:45:19.032710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.492 [2024-07-13 15:45:19.032738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.492 qpair failed and we were unable to recover it. 00:33:48.492 [2024-07-13 15:45:19.032896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.492 [2024-07-13 15:45:19.032923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.492 qpair failed and we were unable to recover it. 00:33:48.492 [2024-07-13 15:45:19.033085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.492 [2024-07-13 15:45:19.033111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.492 qpair failed and we were unable to recover it. 00:33:48.492 [2024-07-13 15:45:19.033272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.492 [2024-07-13 15:45:19.033300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.492 qpair failed and we were unable to recover it. 00:33:48.492 [2024-07-13 15:45:19.033477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.492 [2024-07-13 15:45:19.033511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.492 qpair failed and we were unable to recover it. 
00:33:48.492 [2024-07-13 15:45:19.033672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.492 [2024-07-13 15:45:19.033697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.492 qpair failed and we were unable to recover it. 00:33:48.492 [2024-07-13 15:45:19.033855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.492 [2024-07-13 15:45:19.033888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.492 qpair failed and we were unable to recover it. 00:33:48.492 [2024-07-13 15:45:19.034052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.492 [2024-07-13 15:45:19.034077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.492 qpair failed and we were unable to recover it. 00:33:48.492 [2024-07-13 15:45:19.034214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.492 [2024-07-13 15:45:19.034239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.492 qpair failed and we were unable to recover it. 00:33:48.492 [2024-07-13 15:45:19.034405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.492 [2024-07-13 15:45:19.034430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.492 qpair failed and we were unable to recover it. 00:33:48.492 [2024-07-13 15:45:19.034604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.492 [2024-07-13 15:45:19.034631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.492 qpair failed and we were unable to recover it. 00:33:48.492 [2024-07-13 15:45:19.034815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.492 [2024-07-13 15:45:19.034840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.492 qpair failed and we were unable to recover it. 00:33:48.492 [2024-07-13 15:45:19.035003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.035029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.035167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.035192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.035377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.035402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 
00:33:48.493 [2024-07-13 15:45:19.035567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.035592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.035751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.035777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.035967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.035993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.036188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.036216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.036419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.036447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.036602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.036628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.036764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.036789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.036950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.036976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.037142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.037167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.037305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.037331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 
00:33:48.493 [2024-07-13 15:45:19.037506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.037531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.037725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.037750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.037934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.037962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.038140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.038168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.038351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.038377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.038505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.038532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.038723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.038752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.038957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.038983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.039198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.039227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.039400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.039427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 
00:33:48.493 [2024-07-13 15:45:19.039608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.039634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.039779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.039804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.039964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.039990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.040153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.040179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.040313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.040338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.040513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.040538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.040722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.040747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.040934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.040960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.041096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.041121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.493 [2024-07-13 15:45:19.041250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.041279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 
00:33:48.493 [2024-07-13 15:45:19.041463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.493 [2024-07-13 15:45:19.041488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.493 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.041717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.041742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.041924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.041950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.042111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.042137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.042323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.042348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.042480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.042505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.042700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.042725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.042883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.042912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.043119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.043144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.043266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.043291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 
00:33:48.494 [2024-07-13 15:45:19.043453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.043479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.043641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.043666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.043789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.043814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.043986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.044012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.044136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.044163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.044295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.044322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.044538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.044567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.044739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.044767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.044954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.044981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.045167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.045192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 
00:33:48.494 [2024-07-13 15:45:19.045328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.045353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.045538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.045566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.045770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.045798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.045982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.046008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.046167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.046192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.046376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.046401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.046591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.494 [2024-07-13 15:45:19.046616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.494 qpair failed and we were unable to recover it. 00:33:48.494 [2024-07-13 15:45:19.046776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.046801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.046968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.046994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.047122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.047148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 
00:33:48.495 [2024-07-13 15:45:19.047276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.047317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.047497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.047527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.047714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.047741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.047906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.047932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.048069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.048096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.048295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.048321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.048479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.048504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.048657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.048686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.048901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.048927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.049095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.049125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 
00:33:48.495 [2024-07-13 15:45:19.049313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.049338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.049527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.049552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.049687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.049712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.049878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.049904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.050064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.050089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.050273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.050301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.050475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.050502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.050701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.050729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.050911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.050938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.051096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.051122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 
00:33:48.495 [2024-07-13 15:45:19.051277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.051302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.051437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.051463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.051647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.051676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.051886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.051912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.052075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.052101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.052233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.052258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.052456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.052482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.052636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.052661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.052793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.052818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.052960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.052986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 
00:33:48.495 [2024-07-13 15:45:19.053120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.495 [2024-07-13 15:45:19.053146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.495 qpair failed and we were unable to recover it. 00:33:48.495 [2024-07-13 15:45:19.053331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.496 [2024-07-13 15:45:19.053359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.496 qpair failed and we were unable to recover it. 00:33:48.496 [2024-07-13 15:45:19.053566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.496 [2024-07-13 15:45:19.053591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.496 qpair failed and we were unable to recover it. 00:33:48.496 [2024-07-13 15:45:19.053754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.496 [2024-07-13 15:45:19.053780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.496 qpair failed and we were unable to recover it. 00:33:48.496 [2024-07-13 15:45:19.053912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.496 [2024-07-13 15:45:19.053938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.496 qpair failed and we were unable to recover it. 00:33:48.496 [2024-07-13 15:45:19.054132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.496 [2024-07-13 15:45:19.054157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.496 qpair failed and we were unable to recover it. 00:33:48.496 [2024-07-13 15:45:19.054329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.496 [2024-07-13 15:45:19.054355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.496 qpair failed and we were unable to recover it. 00:33:48.496 [2024-07-13 15:45:19.054490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.496 [2024-07-13 15:45:19.054515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.496 qpair failed and we were unable to recover it. 00:33:48.496 [2024-07-13 15:45:19.054708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.496 [2024-07-13 15:45:19.054733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.496 qpair failed and we were unable to recover it. 00:33:48.496 [2024-07-13 15:45:19.054894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.496 [2024-07-13 15:45:19.054920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.496 qpair failed and we were unable to recover it. 
00:33:48.496 [2024-07-13 15:45:19.055104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.496 [2024-07-13 15:45:19.055129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.496 qpair failed and we were unable to recover it. 00:33:48.496 [2024-07-13 15:45:19.055289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.496 [2024-07-13 15:45:19.055314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.496 qpair failed and we were unable to recover it. 00:33:48.496 [2024-07-13 15:45:19.055471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.496 [2024-07-13 15:45:19.055496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.496 qpair failed and we were unable to recover it. 00:33:48.496 [2024-07-13 15:45:19.055682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.496 [2024-07-13 15:45:19.055707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.496 qpair failed and we were unable to recover it. 00:33:48.496 [2024-07-13 15:45:19.055860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.496 [2024-07-13 15:45:19.055896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.496 qpair failed and we were unable to recover it. 00:33:48.496 [2024-07-13 15:45:19.056079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.496 [2024-07-13 15:45:19.056104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.496 qpair failed and we were unable to recover it. 00:33:48.496 [2024-07-13 15:45:19.056252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.496 [2024-07-13 15:45:19.056279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.496 qpair failed and we were unable to recover it. 00:33:48.496 [2024-07-13 15:45:19.056436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.496 [2024-07-13 15:45:19.056461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.496 qpair failed and we were unable to recover it. 00:33:48.496 [2024-07-13 15:45:19.056610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.496 [2024-07-13 15:45:19.056635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.496 qpair failed and we were unable to recover it. 00:33:48.496 [2024-07-13 15:45:19.056790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.496 [2024-07-13 15:45:19.056819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.496 qpair failed and we were unable to recover it. 
00:33:48.496 [2024-07-13 15:45:19.056978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.496 [2024-07-13 15:45:19.057004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.496 qpair failed and we were unable to recover it.
00:33:48.496 [2024-07-13 15:45:19.057168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.496 [2024-07-13 15:45:19.057193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.496 qpair failed and we were unable to recover it.
[The same three-line error sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats continuously from 15:45:19.056978 through 15:45:19.098399.]
00:33:48.502 [2024-07-13 15:45:19.098372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.502 [2024-07-13 15:45:19.098399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.502 qpair failed and we were unable to recover it.
00:33:48.502 [2024-07-13 15:45:19.098579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.502 [2024-07-13 15:45:19.098605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.502 qpair failed and we were unable to recover it. 00:33:48.502 [2024-07-13 15:45:19.098761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.502 [2024-07-13 15:45:19.098787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.502 qpair failed and we were unable to recover it. 00:33:48.502 [2024-07-13 15:45:19.098957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.098986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.099173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.099198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.099357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.099386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.099541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.099568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.099724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.099749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.099920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.099945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.100078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.100120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.100298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.100323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 
00:33:48.503 [2024-07-13 15:45:19.100527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.100554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.100773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.100798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.101025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.101051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.101235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.101263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.101447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.101474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.101656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.101681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.101873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.101901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.102055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.102082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.102291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.102316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.102508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.102536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 
00:33:48.503 [2024-07-13 15:45:19.102713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.102741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.102957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.102983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.103185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.103213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.103388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.103414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.103600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.103625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.103761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.103788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.103950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.103993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.104207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.104232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.104377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.104402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.104566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.104595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 
00:33:48.503 [2024-07-13 15:45:19.104761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.104786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.104947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.104972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.105153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.105180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.105357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.105382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.105528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.503 [2024-07-13 15:45:19.105556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.503 qpair failed and we were unable to recover it. 00:33:48.503 [2024-07-13 15:45:19.105728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.105755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.105928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.105953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.106162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.106190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.106365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.106392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.106549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.106575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 
00:33:48.504 [2024-07-13 15:45:19.106732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.106757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.106974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.107000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.107135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.107160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.107318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.107343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.107569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.107594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.107779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.107806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.108014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.108040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.108229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.108257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.108439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.108465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.108601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.108626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 
00:33:48.504 [2024-07-13 15:45:19.108785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.108810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.109043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.109069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.109246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.109274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.109445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.109473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.109654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.109679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.109887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.109916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.110094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.110126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.110311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.110336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.110547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.110574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.110715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.110743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 
00:33:48.504 [2024-07-13 15:45:19.110948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.110974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.111192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.111220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.504 qpair failed and we were unable to recover it. 00:33:48.504 [2024-07-13 15:45:19.111397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.504 [2024-07-13 15:45:19.111425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.111597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.111622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.111833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.111860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.112072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.112100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.112294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.112319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.112523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.112551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.112735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.112762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.112940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.112967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 
00:33:48.505 [2024-07-13 15:45:19.113154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.113182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.113348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.113376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.113555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.113580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.113760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.113788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.113986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.114015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.114200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.114225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.114434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.114462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.114617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.114645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.114844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.114880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.115042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.115066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 
00:33:48.505 [2024-07-13 15:45:19.115270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.115298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.115477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.115502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.115644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.115668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.115832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.115893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.116100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.116125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.116307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.116334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.116506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.116534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.116739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.116764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.116927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.116952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.117167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.117195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 
00:33:48.505 [2024-07-13 15:45:19.117372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.117399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.117606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.117634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.117822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.117847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.118017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.118044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.118251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.118279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.118479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.118507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.118666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.505 [2024-07-13 15:45:19.118696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.505 qpair failed and we were unable to recover it. 00:33:48.505 [2024-07-13 15:45:19.118831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.118856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.119083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.119111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.119321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.119346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 
00:33:48.506 [2024-07-13 15:45:19.119502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.119530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.119708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.119737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.119899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.119926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.120089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.120131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.120298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.120326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.120536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.120562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.120718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.120748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.120931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.120959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.121147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.121173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.121307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.121333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 
00:33:48.506 [2024-07-13 15:45:19.121498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.121525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.121688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.121713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.121914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.121941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.122093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.122118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.122288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.122313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.122436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.122476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.122662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.122687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.122889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.122915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.123098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.123126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.123272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.123301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 
00:33:48.506 [2024-07-13 15:45:19.123510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.123535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.123698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.123723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.123860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.123892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.124051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.124076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.124277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.124305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.124506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.124534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.124715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.124740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.124948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.124977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.125126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.125155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.125340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.125364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 
00:33:48.506 [2024-07-13 15:45:19.125516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.125544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.125698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.125726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.506 qpair failed and we were unable to recover it. 00:33:48.506 [2024-07-13 15:45:19.125932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.506 [2024-07-13 15:45:19.125957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.126094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.126119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.126321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.126349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.126502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.126527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.126690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.126719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.126879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.126906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.127065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.127090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.127248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.127276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 
00:33:48.507 [2024-07-13 15:45:19.127423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.127451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.127604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.127646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.127815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.127843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.128031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.128056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.128215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.128240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.128398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.128424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.128572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.128601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.128784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.128810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.128971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.128997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.129182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.129210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 
00:33:48.507 [2024-07-13 15:45:19.129386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.129411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.129549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.129575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.129749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.129777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.129957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.129983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.130148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.130173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.130312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.130339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.130525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.130550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.130728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.130756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.130945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.130974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.131181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.131206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 
00:33:48.507 [2024-07-13 15:45:19.131363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.131393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.131572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.131599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.131763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.131788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.131985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.132011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.132232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.132260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.132441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.507 [2024-07-13 15:45:19.132466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.507 qpair failed and we were unable to recover it. 00:33:48.507 [2024-07-13 15:45:19.132638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.132666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.132845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.132879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.133060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.133086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.133223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.133249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 
00:33:48.508 [2024-07-13 15:45:19.133389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.133415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.133575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.133600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.133739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.133781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.133955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.133984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.134132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.134158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.134301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.134326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.134465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.134496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.134626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.134650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.134806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.134835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.135035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.135061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 
00:33:48.508 [2024-07-13 15:45:19.135225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.135250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.135438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.135465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.135639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.135667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.135843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.135879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.136021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.136047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.136208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.136236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.136438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.136464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.136649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.136677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.136860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.136891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.137055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.137080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 
00:33:48.508 [2024-07-13 15:45:19.137268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.137298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.137503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.137528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.137692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.137717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.137876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.137902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.138110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.138138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.138317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.138342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.138467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.138510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.138716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.138744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.138929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.138955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.508 [2024-07-13 15:45:19.139135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.139163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 
00:33:48.508 [2024-07-13 15:45:19.139318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-07-13 15:45:19.139348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.508 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.139555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.139580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.139730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.139757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.139935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.139963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.140117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.140142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.140321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.140349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.140552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.140579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.140755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.140780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.140964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.140992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.141169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.141198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 
00:33:48.509 [2024-07-13 15:45:19.141357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.141383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.141591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.141619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.141828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.141853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.142020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.142045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.142235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.142260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.142433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.142461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.142664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.142693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.142876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.142919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.143106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.143131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.143319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.143345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 
00:33:48.509 [2024-07-13 15:45:19.143554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.143581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.143724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.143752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.143958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.143984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.144149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.144174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.144304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.144329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.144512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.144537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.144720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.144745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.509 qpair failed and we were unable to recover it. 00:33:48.509 [2024-07-13 15:45:19.144914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-07-13 15:45:19.144940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.145096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.145121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.145284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.145309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 
00:33:48.510 [2024-07-13 15:45:19.145522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.145550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.145736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.145761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.145932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.145961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.146136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.146165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.146350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.146375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.146533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.146558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.146796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.146821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.147006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.147032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.147219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.147247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.147462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.147487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 
00:33:48.510 [2024-07-13 15:45:19.147650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.147675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.147807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.147832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.148017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.148045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.148248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.148273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.148452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.148481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.148683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.148711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.148923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.148948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.149122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.149150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.149316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.149344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.149525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.149550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 
00:33:48.510 [2024-07-13 15:45:19.149715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.149740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.149897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.149923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.150080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.150106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.150292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.150320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.150467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.150495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.150704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.150729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.150856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.150895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.151036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.151061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.151262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.151286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 00:33:48.510 [2024-07-13 15:45:19.151465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.151493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.510 qpair failed and we were unable to recover it. 
00:33:48.510 [2024-07-13 15:45:19.151673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-07-13 15:45:19.151701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.151855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.151887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.152049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.152074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.152253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.152281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.152455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.152480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.152655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.152683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.152855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.152890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.153069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.153093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.153306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.153333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.153518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.153543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 
00:33:48.511 [2024-07-13 15:45:19.153711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.153736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.153911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.153940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.154120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.154149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.154326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.154350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.154535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.154563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.154698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.154726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.154909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.154934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.155092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.155120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.155287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.155314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.155495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.155521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 
00:33:48.511 [2024-07-13 15:45:19.155723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.155751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.155954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.155979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.156164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.156189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.156394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.156422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.156592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.156620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.156803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.156828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.157024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.157050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.157258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.157286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.157461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.157486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.157662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.157689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 
00:33:48.511 [2024-07-13 15:45:19.157877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.157904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.158092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.158117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.158275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.158304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.158505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.158533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.158721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.158746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.158884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-07-13 15:45:19.158927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.511 qpair failed and we were unable to recover it. 00:33:48.511 [2024-07-13 15:45:19.159133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.512 [2024-07-13 15:45:19.159165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.512 qpair failed and we were unable to recover it. 00:33:48.512 [2024-07-13 15:45:19.159338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.512 [2024-07-13 15:45:19.159363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.512 qpair failed and we were unable to recover it. 00:33:48.512 [2024-07-13 15:45:19.159568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.512 [2024-07-13 15:45:19.159596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.512 qpair failed and we were unable to recover it. 00:33:48.512 [2024-07-13 15:45:19.159754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.512 [2024-07-13 15:45:19.159780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.512 qpair failed and we were unable to recover it. 
00:33:48.512 [2024-07-13 15:45:19.159952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.512 [2024-07-13 15:45:19.159978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.512 qpair failed and we were unable to recover it. 00:33:48.512 [2024-07-13 15:45:19.160157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.512 [2024-07-13 15:45:19.160185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.512 qpair failed and we were unable to recover it. 00:33:48.512 [2024-07-13 15:45:19.160328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.512 [2024-07-13 15:45:19.160356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.512 qpair failed and we were unable to recover it. 00:33:48.512 [2024-07-13 15:45:19.160560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.512 [2024-07-13 15:45:19.160585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.512 qpair failed and we were unable to recover it. 00:33:48.512 [2024-07-13 15:45:19.160755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.512 [2024-07-13 15:45:19.160783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.512 qpair failed and we were unable to recover it. 00:33:48.512 [2024-07-13 15:45:19.160916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.512 [2024-07-13 15:45:19.160944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.512 qpair failed and we were unable to recover it. 00:33:48.512 [2024-07-13 15:45:19.161148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.512 [2024-07-13 15:45:19.161173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.512 qpair failed and we were unable to recover it. 00:33:48.512 [2024-07-13 15:45:19.161350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.512 [2024-07-13 15:45:19.161379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.512 qpair failed and we were unable to recover it. 00:33:48.512 [2024-07-13 15:45:19.161548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.512 [2024-07-13 15:45:19.161576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.512 qpair failed and we were unable to recover it. 00:33:48.512 [2024-07-13 15:45:19.161713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.512 [2024-07-13 15:45:19.161738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.512 qpair failed and we were unable to recover it. 
00:33:48.516 [2024-07-13 15:45:19.188546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.516 [2024-07-13 15:45:19.188575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.516 qpair failed and we were unable to recover it.
00:33:48.516 [2024-07-13 15:45:19.188753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.516 [2024-07-13 15:45:19.188778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.516 qpair failed and we were unable to recover it.
00:33:48.516 [2024-07-13 15:45:19.188939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.516 [2024-07-13 15:45:19.188967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.516 qpair failed and we were unable to recover it.
00:33:48.516 [2024-07-13 15:45:19.189166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.516 [2024-07-13 15:45:19.189211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420
00:33:48.516 qpair failed and we were unable to recover it.
00:33:48.516 [2024-07-13 15:45:19.189431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.516 [2024-07-13 15:45:19.189462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420
00:33:48.516 qpair failed and we were unable to recover it.
00:33:48.516 [2024-07-13 15:45:19.189694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.516 [2024-07-13 15:45:19.189726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420
00:33:48.516 qpair failed and we were unable to recover it.
00:33:48.516 [2024-07-13 15:45:19.189940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.516 [2024-07-13 15:45:19.189969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.516 qpair failed and we were unable to recover it.
00:33:48.516 [2024-07-13 15:45:19.190179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.516 [2024-07-13 15:45:19.190205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.516 qpair failed and we were unable to recover it.
00:33:48.516 [2024-07-13 15:45:19.190362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.516 [2024-07-13 15:45:19.190390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.516 qpair failed and we were unable to recover it.
00:33:48.516 [2024-07-13 15:45:19.190593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.516 [2024-07-13 15:45:19.190620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.516 qpair failed and we were unable to recover it.
00:33:48.517 [2024-07-13 15:45:19.203780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.517 [2024-07-13 15:45:19.203806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.517 qpair failed and we were unable to recover it. 00:33:48.517 [2024-07-13 15:45:19.203997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.517 [2024-07-13 15:45:19.204023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.517 qpair failed and we were unable to recover it. 00:33:48.517 [2024-07-13 15:45:19.204209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.517 [2024-07-13 15:45:19.204238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.517 qpair failed and we were unable to recover it. 00:33:48.517 [2024-07-13 15:45:19.204420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.204445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.204630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.204658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.204872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.204901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.205087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.205112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.205294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.205322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.205610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.205665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.205848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.205879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 
00:33:48.518 [2024-07-13 15:45:19.206047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.206072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.206253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.206281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.206490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.206515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.206723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.206756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.206944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.206974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.207187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.207212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.207392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.207421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.207593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.207638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.207810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.207838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.208108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.208134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 
00:33:48.518 [2024-07-13 15:45:19.208273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.208316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.208496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.208522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.208700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.208728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.208897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.208939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.209127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.209152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.209328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.209356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.209526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.209572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.209746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.209771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.209938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.518 [2024-07-13 15:45:19.209964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.518 qpair failed and we were unable to recover it. 00:33:48.518 [2024-07-13 15:45:19.210134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.802 [2024-07-13 15:45:19.210178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.802 qpair failed and we were unable to recover it. 
00:33:48.802 [2024-07-13 15:45:19.210348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.802 [2024-07-13 15:45:19.210383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.802 qpair failed and we were unable to recover it. 00:33:48.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1262515 Killed "${NVMF_APP[@]}" "$@" 00:33:48.802 [2024-07-13 15:45:19.210547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.802 [2024-07-13 15:45:19.210582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.802 qpair failed and we were unable to recover it. 00:33:48.802 [2024-07-13 15:45:19.210762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.802 [2024-07-13 15:45:19.210794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.802 qpair failed and we were unable to recover it. 00:33:48.802 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:33:48.802 [2024-07-13 15:45:19.211056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.802 [2024-07-13 15:45:19.211089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.802 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:48.802 qpair failed and we were unable to recover it. 00:33:48.802 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:48.802 [2024-07-13 15:45:19.211326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.802 [2024-07-13 15:45:19.211364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.802 qpair failed and we were unable to recover it. 00:33:48.802 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:48.802 [2024-07-13 15:45:19.211565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.802 [2024-07-13 15:45:19.211603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.802 qpair failed and we were unable to recover it. 00:33:48.803 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:48.803 [2024-07-13 15:45:19.211836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.211881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 
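The repeated errno = 111 failures above line up with the bash notice that line 36 of target_disconnect.sh killed the running nvmf_tgt ("${NVMF_APP[@]}"): on Linux, errno 111 is ECONNREFUSED, which connect() returns while nothing is listening on 10.0.0.2:4420. The standalone C sketch below is not SPDK code, only an illustration of the syscall-level error that the posix_sock_create / nvme_tcp_qpair_connect_sock lines keep reporting until the target is restarted.

    /* Minimal sketch (not SPDK code): reproduce the errno = 111 pattern above.
     * On Linux, errno 111 is ECONNREFUSED, returned by connect() when nothing
     * is listening on the destination port -- here, after nvmf_tgt was killed,
     * nothing listens on 10.0.0.2:4420 until the target comes back up. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the port this prints errno 111 (ECONNREFUSED),
             * matching the posix_sock_create errors in the log. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }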
00:33:48.803 [2024-07-13 15:45:19.212045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.212078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.803 [2024-07-13 15:45:19.212289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.212329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.803 [2024-07-13 15:45:19.212523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.212549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.803 [2024-07-13 15:45:19.212730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.212758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.803 [2024-07-13 15:45:19.212953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.212979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.803 [2024-07-13 15:45:19.213113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.213140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.803 [2024-07-13 15:45:19.213275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.213320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.803 [2024-07-13 15:45:19.213501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.213529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.803 [2024-07-13 15:45:19.213715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.213740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.803 [2024-07-13 15:45:19.213907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.213935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 
00:33:48.803 [2024-07-13 15:45:19.214124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.214149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.803 [2024-07-13 15:45:19.214283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.214309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.803 [2024-07-13 15:45:19.214510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.214537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.803 [2024-07-13 15:45:19.214716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.214744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.803 [2024-07-13 15:45:19.214967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.214994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.803 [2024-07-13 15:45:19.215154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.215180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.803 [2024-07-13 15:45:19.215343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.215368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.803 [2024-07-13 15:45:19.215525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.215550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.803 [2024-07-13 15:45:19.215759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.215787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 
00:33:48.803 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1263060 00:33:48.803 [2024-07-13 15:45:19.215994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.216021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.803 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1263060 00:33:48.803 [2024-07-13 15:45:19.216157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.216184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.803 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1263060 ']' 00:33:48.803 [2024-07-13 15:45:19.216342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.216386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.803 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:48.803 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:48.803 [2024-07-13 15:45:19.216593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.216622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.803 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:48.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:48.803 [2024-07-13 15:45:19.216807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:48.803 [2024-07-13 15:45:19.216833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 
00:33:48.803 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:48.803 [2024-07-13 15:45:19.217064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.217094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.803 [2024-07-13 15:45:19.217308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.217335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.803 [2024-07-13 15:45:19.217521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.803 [2024-07-13 15:45:19.217546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.803 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.217706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.217731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.217877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.217904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.218047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.218073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.218212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.218237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.218398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.218426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.218614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.218639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 
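The trace above shows disconnect_init starting a fresh target via nvmfappstart -m 0xF0 (new nvmfpid 1263060, launched with ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt) and then calling waitforlisten with rpc_addr=/var/tmp/spdk.sock and max_retries=100, i.e. polling until the new process accepts connections on its RPC UNIX socket. waitforlisten itself is a shell helper in the SPDK test scripts; the C sketch below only illustrates the same wait-until-listening idea under those assumptions, it is not the actual helper.

    /* Illustrative sketch only -- SPDK's waitforlisten is a shell helper; this
     * shows the equivalent idea: poll until the restarted nvmf_tgt accepts
     * connections on its RPC UNIX socket (/var/tmp/spdk.sock in the trace),
     * giving up after a bounded number of retries. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    static int wait_for_rpc_socket(const char *path, int max_retries)
    {
        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;

            struct sockaddr_un addr = {0};
            addr.sun_family = AF_UNIX;
            strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;            /* the target is up and listening */
            }
            close(fd);
            sleep(1);                /* not listening yet -- retry */
        }
        return -1;                   /* gave up after max_retries attempts */
    }

    int main(void)
    {
        /* Values taken from the trace: rpc_addr=/var/tmp/spdk.sock, max_retries=100 */
        if (wait_for_rpc_socket("/var/tmp/spdk.sock", 100) != 0) {
            fprintf(stderr, "process never started listening\n");
            return 1;
        }
        printf("RPC socket is ready\n");
        return 0;
    }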
00:33:48.804 [2024-07-13 15:45:19.218803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.218828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.219017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.219044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.219202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.219227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.219419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.219444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.219634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.219659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.219845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.219880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.220018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.220044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.220175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.220200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.220331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.220358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.220546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.220573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 
00:33:48.804 [2024-07-13 15:45:19.220732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.220758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.220947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.220973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.221116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.221141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.221330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.221355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.221540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.221565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.221703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.221730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.221914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.221949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.222106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.222131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.222292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.222318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.222479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.222504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 
00:33:48.804 [2024-07-13 15:45:19.222686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.222711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.222893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.222921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.223062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.223088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.223244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.223270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.223436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.223462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.223596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.223622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.804 qpair failed and we were unable to recover it. 00:33:48.804 [2024-07-13 15:45:19.223783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.804 [2024-07-13 15:45:19.223809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.223979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.224005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.224165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.224191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.224349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.224374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 
00:33:48.805 [2024-07-13 15:45:19.224542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.224568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.224730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.224757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.224905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.224932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.225073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.225099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.225240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.225266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.225422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.225447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.225586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.225613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.225770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.225795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.225981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.226007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.226172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.226199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 
00:33:48.805 [2024-07-13 15:45:19.226332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.226357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.226513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.226538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.226728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.226753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.226916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.226942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.227079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.227105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.227300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.227326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.227489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.227514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.227680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.227706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.227873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.227899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.228062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.228087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 
00:33:48.805 [2024-07-13 15:45:19.228244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.228269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.228460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.228485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.228618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.228643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.228803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.228828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.228974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.229001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.229162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.229187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.229372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.229398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.229564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.229591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.805 [2024-07-13 15:45:19.229750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.805 [2024-07-13 15:45:19.229775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.805 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.229909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.229935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 
00:33:48.806 [2024-07-13 15:45:19.230099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.230125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.230261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.230287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.230425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.230451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.230578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.230603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.230765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.230790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.230944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.230970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.231161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.231187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.231374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.231400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.231582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.231607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.231790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.231815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 
00:33:48.806 [2024-07-13 15:45:19.231951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.231978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.232142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.232168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.232350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.232375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.232503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.232528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.232667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.232693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.232859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.232896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.233028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.233054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.233212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.233237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.233372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.233397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.233583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.233608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 
00:33:48.806 [2024-07-13 15:45:19.233763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.233788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.233952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.233978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.234111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.234137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.234327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.234356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.234494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.234519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.234703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.234729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.234884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.234910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.235071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.235096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.235258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.235283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.235411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.235436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 
00:33:48.806 [2024-07-13 15:45:19.235567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.806 [2024-07-13 15:45:19.235592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.806 qpair failed and we were unable to recover it. 00:33:48.806 [2024-07-13 15:45:19.235747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.235772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.235936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.235962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.236097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.236122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.236321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.236346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.236502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.236527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.236698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.236724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.236880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.236906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.237089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.237115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.237284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.237309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 
00:33:48.807 [2024-07-13 15:45:19.237469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.237494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.237620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.237645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.237782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.237806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.237996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.238022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.238178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.238203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.238402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.238427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.238587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.238612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.238738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.238763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.238936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.238963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.239118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.239144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 
00:33:48.807 [2024-07-13 15:45:19.239316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.239342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.239474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.239499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.239686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.239712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.239899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.239925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.240061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.240087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.240251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.240277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.240444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.240471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.240606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.240632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.240796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.240821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.240973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.240999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 
00:33:48.807 [2024-07-13 15:45:19.241145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.241171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.807 qpair failed and we were unable to recover it. 00:33:48.807 [2024-07-13 15:45:19.241306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.807 [2024-07-13 15:45:19.241332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.241500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.241525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.241698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.241728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.241872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.241899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.242053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.242078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.242269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.242294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.242432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.242459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.242629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.242656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.242819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.242844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 
00:33:48.808 [2024-07-13 15:45:19.243016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.243042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.243223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.243249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.243405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.243430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.243594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.243619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.243749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.243774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.243949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.243976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.244105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.244131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.244265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.244290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.244480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.244506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.244663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.244688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 
00:33:48.808 [2024-07-13 15:45:19.244849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.244897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.245069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.245095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.245279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.245304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.245429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.245454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.245618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.245643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.245775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.245800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.245997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.246023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.246187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.246214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.246344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.808 [2024-07-13 15:45:19.246370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.808 qpair failed and we were unable to recover it. 00:33:48.808 [2024-07-13 15:45:19.246564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.246589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 
00:33:48.809 [2024-07-13 15:45:19.246753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.246778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.246978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.247004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.247134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.247159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.247297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.247323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.247485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.247511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.247667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.247692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.247863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.247897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.248030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.248057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.248186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.248211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.248400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.248426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 
00:33:48.809 [2024-07-13 15:45:19.248563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.248590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.248776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.248801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.248969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.248995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.249156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.249185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.249374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.249400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.249536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.249562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.249723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.249748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.249920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.249946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.250108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.250133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.250265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.250290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 
00:33:48.809 [2024-07-13 15:45:19.250452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.250479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.250647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.250673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.250858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.250894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.251034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.251060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.251229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.251254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.251419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.251445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.251629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.251654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.251829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.251854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.251997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.252023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.252204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.252229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 
00:33:48.809 [2024-07-13 15:45:19.252396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.252421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.252551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.252578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.252714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.252738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.252925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.809 [2024-07-13 15:45:19.252951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.809 qpair failed and we were unable to recover it. 00:33:48.809 [2024-07-13 15:45:19.253094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.253119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.253287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.253312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.253470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.253496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.253655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.253680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.253859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.253891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.254059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.254084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 
00:33:48.810 [2024-07-13 15:45:19.254268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.254293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.254427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.254452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.254583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.254608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.254732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.254757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.254953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.254978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.255145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.255170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.255301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.255326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.255488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.255513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.255702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.255728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.255884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.255910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 
00:33:48.810 [2024-07-13 15:45:19.256063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.256088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.256227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.256253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.256385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.256411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.256570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.256599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.256734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.256759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.256925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.256951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.257118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.257144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.257312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.257337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.257525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.257550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.257709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.257734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 
00:33:48.810 [2024-07-13 15:45:19.257905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.257931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.258087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.258112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.258274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.258299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.258436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.258462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.258596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.258621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.258807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.258832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.810 [2024-07-13 15:45:19.258973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.810 [2024-07-13 15:45:19.258999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.810 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.259171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.259197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.259388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.259413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.259601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.259627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 
00:33:48.811 [2024-07-13 15:45:19.259814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.259839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.260030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.260055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.260248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.260273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.260426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.260452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.260618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.260644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.260813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.260838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.261031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.261056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.261226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.261251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.261386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.261411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.261606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.261631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 
00:33:48.811 [2024-07-13 15:45:19.261804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.261829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.262071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.262097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.262294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.262319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.262445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.262470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.262611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.262593] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:33:48.811 [2024-07-13 15:45:19.262637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.262672] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:48.811 [2024-07-13 15:45:19.262863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.262894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.263047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.263071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.263268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.263293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.263479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.263504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 
00:33:48.811 [2024-07-13 15:45:19.263663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.263688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.263827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.263860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.263997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.264023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.264180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.264230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.264411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.264439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.264629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.264656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.264847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.264890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.265058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.265084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.265244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.265270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.811 qpair failed and we were unable to recover it. 00:33:48.811 [2024-07-13 15:45:19.265458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.811 [2024-07-13 15:45:19.265484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 
00:33:48.812 [2024-07-13 15:45:19.265641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.265667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.265832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.265858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.266028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.266055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.266231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.266258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.266419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.266445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.266588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.266615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.266785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.266817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.266980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.267008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.267183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.267222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.267399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.267426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 
00:33:48.812 [2024-07-13 15:45:19.267615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.267641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.267796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.267821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.268000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.268027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.268160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.268197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.268362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.268387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.268553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.268578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.268742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.268767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.268940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.268968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.269160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.269186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.269317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.269343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 
00:33:48.812 [2024-07-13 15:45:19.269538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.269564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.269700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.269725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.269893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.269921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.270057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.270085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.270230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.270257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.270427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.270453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.270641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.270667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.270810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.270836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.271011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.271037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 00:33:48.812 [2024-07-13 15:45:19.271176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.812 [2024-07-13 15:45:19.271202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.812 qpair failed and we were unable to recover it. 
00:33:48.812 [2024-07-13 15:45:19.271395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.271421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.271597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.271623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.271756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.271781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.271979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.272006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.272138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.272165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.272352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.272377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.272541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.272566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.272724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.272749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.272905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.272931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.273098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.273125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 
00:33:48.813 [2024-07-13 15:45:19.273284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.273310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.273474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.273501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.273693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.273719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.273908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.273934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.274120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.274146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.274335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.274360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.274497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.274527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.274719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.274745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.274934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.274960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.275123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.275149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 
00:33:48.813 [2024-07-13 15:45:19.275308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.275334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.275523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.275549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.275735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.275761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.275928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.275954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.276148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.276185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.276344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.276370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.813 qpair failed and we were unable to recover it. 00:33:48.813 [2024-07-13 15:45:19.276536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.813 [2024-07-13 15:45:19.276562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.276731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.276757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.276918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.276944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.277101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.277126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 
00:33:48.814 [2024-07-13 15:45:19.277294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.277321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.277483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.277509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.277642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.277668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.277801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.277827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.278021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.278047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.278180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.278206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.278370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.278396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.278556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.278583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.278743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.278770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.278937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.278965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 
00:33:48.814 [2024-07-13 15:45:19.279153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.279179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.279337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.279363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.279534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.279560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.279712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.279739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.279878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.279905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.280095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.280122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.280294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.280320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.280462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.280488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.280631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.280657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.280820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.280845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 
00:33:48.814 [2024-07-13 15:45:19.281002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.281040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.281194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.281221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.281382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.281408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.281593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.281618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.281756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.281783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.281950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.281976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.282114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.282147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.282320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.282347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.282483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.282509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 00:33:48.814 [2024-07-13 15:45:19.282670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.814 [2024-07-13 15:45:19.282696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.814 qpair failed and we were unable to recover it. 
00:33:48.814 [2024-07-13 15:45:19.282861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.282892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.283053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.283079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.283238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.283264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.283426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.283452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.283637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.283663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.283800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.283826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.283984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.284012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.284202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.284227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.284383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.284409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.284580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.284605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 
00:33:48.815 [2024-07-13 15:45:19.284775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.284800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.284940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.284966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.285131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.285157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.285316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.285342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.285529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.285555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.285746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.285772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.285941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.285978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.286122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.286149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.286316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.286344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.286481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.286512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 
00:33:48.815 [2024-07-13 15:45:19.286709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.286736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.286876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.286908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.287093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.287119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.287290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.287323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.287489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.287516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.287679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.287705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.287873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.287900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.288071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.288097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.288240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.288266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.288404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.288431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 
00:33:48.815 [2024-07-13 15:45:19.288602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.288629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.288793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.288819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.288985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.289012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.289151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.815 [2024-07-13 15:45:19.289177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.815 qpair failed and we were unable to recover it. 00:33:48.815 [2024-07-13 15:45:19.289339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.289365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.289524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.289550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.289742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.289768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.289946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.289973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.290146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.290173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.290366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.290392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 
00:33:48.816 [2024-07-13 15:45:19.290554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.290580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.290741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.290767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.290962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.290988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.291119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.291146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.291284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.291310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.291477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.291503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.291659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.291685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.291821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.291848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.292026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.292052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.292212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.292239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 
00:33:48.816 [2024-07-13 15:45:19.292403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.292430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.292574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.292601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.292786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.292812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.292989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.293015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.293154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.293180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.293347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.293374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.293563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.293589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.293762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.293788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.293975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.294002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.294138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.294164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 
00:33:48.816 [2024-07-13 15:45:19.294356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.294382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.294573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.294599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.294786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.294812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.294983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.295014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.295177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.295203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.295346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.295372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.295541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.816 [2024-07-13 15:45:19.295567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.816 qpair failed and we were unable to recover it. 00:33:48.816 [2024-07-13 15:45:19.295725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.295751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.295916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.295943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.296110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.296136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 
00:33:48.817 [2024-07-13 15:45:19.296299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.296325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.296459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.296485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.296653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.296679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.296854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.296885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.297028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.297055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.297222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.297252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.297434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.297464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.297659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.297689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.297911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.297941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.298125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.298154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 
00:33:48.817 EAL: No free 2048 kB hugepages reported on node 1 00:33:48.817 [2024-07-13 15:45:19.298333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.298364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.298545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.298574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.298757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.298786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.298974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.299003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.299189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.299219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.299426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.299454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.299648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.299677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.300826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.300879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.301062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.301092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.301293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.301321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 
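[editorial note, not test output] The EAL line at the top of this block ("No free 2048 kB hugepages reported on node 1") is separate from the connection errors: DPDK's environment abstraction layer found no free 2 MB hugepages on NUMA node 1 while the process was initializing. A minimal sketch, assuming a Linux host, that reads the kernel's system-wide hugepage counters from /proc/meminfo as one way to cross-check that notice (per-node counts live under /sys/devices/system/node/node1/hugepages/); it is illustrative only and not part of the SPDK test scripts:

    /* hugepage_check.c - illustrative only; prints the hugepage counters
     * the EAL notice is ultimately derived from. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        if (f == NULL) {
            perror("fopen /proc/meminfo");
            return 1;
        }

        char line[256];
        while (fgets(line, sizeof(line), f) != NULL) {
            /* HugePages_Total / HugePages_Free / Hugepagesize */
            if (strncmp(line, "HugePages", 9) == 0 ||
                strncmp(line, "Hugepagesize", 12) == 0) {
                fputs(line, stdout);
            }
        }

        fclose(f);
        return 0;
    }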
00:33:48.817 [2024-07-13 15:45:19.301492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.301519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.301682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.301708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.301877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.301904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.302045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.302072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.302311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.302338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.302531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.302559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.302696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.302723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.302744] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:48.817 [2024-07-13 15:45:19.302929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.302956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.303159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.303185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 
00:33:48.817 [2024-07-13 15:45:19.303321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.303348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.303512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.817 [2024-07-13 15:45:19.303538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.817 qpair failed and we were unable to recover it. 00:33:48.817 [2024-07-13 15:45:19.303723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.303749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.303920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.303951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.304116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.304142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.304284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.304310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.304469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.304495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.304684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.304711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.304871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.304897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.305060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.305087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 
00:33:48.818 [2024-07-13 15:45:19.305253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.305280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.305415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.305441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.305614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.305640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.305803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.305830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.305998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.306038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.306186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.306213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.306376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.306403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.306585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.306611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.306800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.306826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.307004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.307030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 
00:33:48.818 [2024-07-13 15:45:19.307189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.307215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.307355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.307380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.307536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.307563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.307752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.307777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.307950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.307977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.308168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.308194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.308348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.308374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.308544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.818 [2024-07-13 15:45:19.308570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.818 qpair failed and we were unable to recover it. 00:33:48.818 [2024-07-13 15:45:19.308704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.308729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.308905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.308931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 
00:33:48.819 [2024-07-13 15:45:19.309103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.309137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.309327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.309358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.309544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.309573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.309742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.309772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.309976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.310006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.310233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.310272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.310448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.310476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.310666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.310701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.310829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.310855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.311031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.311058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 
00:33:48.819 [2024-07-13 15:45:19.311195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.311222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.311383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.311410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.311594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.311621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.311784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.311810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.311992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.312020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.312162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.312189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.312327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.312354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.312490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.312516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.313334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.313387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.313563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.313591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 
00:33:48.819 [2024-07-13 15:45:19.313784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.313811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.313986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.314014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.314206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.314233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.314441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.314468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.314597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.314623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.314775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.314801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.314997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.315024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.315166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.315192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.315376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.315403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.315563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.315590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 
00:33:48.819 [2024-07-13 15:45:19.315750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.315777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.315955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.819 [2024-07-13 15:45:19.315982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.819 qpair failed and we were unable to recover it. 00:33:48.819 [2024-07-13 15:45:19.316151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.316178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.316378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.316404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.316561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.316587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.316724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.316750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.316923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.316949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.317102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.317129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.317275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.317302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.317436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.317463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 
00:33:48.820 [2024-07-13 15:45:19.317622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.317654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.317818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.317845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.317996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.318022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.318215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.318241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.318373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.318399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.318536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.318562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.318721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.318747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.318907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.318934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.319095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.319121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.319251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.319277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 
00:33:48.820 [2024-07-13 15:45:19.319443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.319470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.319643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.319683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.319828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.319856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.320021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.320047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.320254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.320280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.320445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.320470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.320601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.320627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.320791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.320826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.320996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.321022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.321161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.321186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 
00:33:48.820 [2024-07-13 15:45:19.321350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.321375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.321514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.321545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.321709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.321736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.321928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.321955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.820 [2024-07-13 15:45:19.322083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.820 [2024-07-13 15:45:19.322109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.820 qpair failed and we were unable to recover it. 00:33:48.821 [2024-07-13 15:45:19.322249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.322275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 00:33:48.821 [2024-07-13 15:45:19.322462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.322488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 00:33:48.821 [2024-07-13 15:45:19.322666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.322693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 00:33:48.821 [2024-07-13 15:45:19.322864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.322897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 00:33:48.821 [2024-07-13 15:45:19.323027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.323052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 
00:33:48.821 [2024-07-13 15:45:19.323237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.323262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 00:33:48.821 [2024-07-13 15:45:19.323426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.323451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 00:33:48.821 [2024-07-13 15:45:19.323607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.323632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 00:33:48.821 [2024-07-13 15:45:19.323796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.323825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 00:33:48.821 [2024-07-13 15:45:19.324005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.324033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 00:33:48.821 [2024-07-13 15:45:19.324191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.324217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 00:33:48.821 [2024-07-13 15:45:19.324358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.324384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 00:33:48.821 [2024-07-13 15:45:19.324548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.324586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 00:33:48.821 [2024-07-13 15:45:19.324746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.324774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 00:33:48.821 [2024-07-13 15:45:19.324948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.324975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 
00:33:48.821 [2024-07-13 15:45:19.325137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.325178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 00:33:48.821 [2024-07-13 15:45:19.325350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.325375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 00:33:48.821 [2024-07-13 15:45:19.325568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.325594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 00:33:48.821 [2024-07-13 15:45:19.325729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.325754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 00:33:48.821 [2024-07-13 15:45:19.325899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.325926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 00:33:48.821 [2024-07-13 15:45:19.326081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.326107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 00:33:48.821 [2024-07-13 15:45:19.326254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.326278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 00:33:48.821 [2024-07-13 15:45:19.326408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.326433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 00:33:48.821 [2024-07-13 15:45:19.326610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.326636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 00:33:48.821 [2024-07-13 15:45:19.326789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.821 [2024-07-13 15:45:19.326814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.821 qpair failed and we were unable to recover it. 
00:33:48.821 [2024-07-13 15:45:19.326976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.821 [2024-07-13 15:45:19.327002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.821 qpair failed and we were unable to recover it.
00:33:48.821-00:33:48.828 [2024-07-13 15:45:19.327164 .. 15:45:19.368652] (the same three-line error sequence repeats continuously over this interval; only the timestamps and the tqpair handle change: 0x7f7020000b90 through 15:45:19.329832, 0x7f7018000b90 from 15:45:19.330035, and 0x7f7020000b90 again from 15:45:19.347079 onward)
00:33:48.822 [2024-07-13 15:45:19.334996] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:33:48.828 [2024-07-13 15:45:19.368818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.828 [2024-07-13 15:45:19.368843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.828 qpair failed and we were unable to recover it.
00:33:48.828 [2024-07-13 15:45:19.368980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.369006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.828 [2024-07-13 15:45:19.369168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.369193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.828 [2024-07-13 15:45:19.369366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.369392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.828 [2024-07-13 15:45:19.369575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.369601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.828 [2024-07-13 15:45:19.369757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.369790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.828 [2024-07-13 15:45:19.369955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.369982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.828 [2024-07-13 15:45:19.370141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.370167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.828 [2024-07-13 15:45:19.370334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.370359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.828 [2024-07-13 15:45:19.370493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.370518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.828 [2024-07-13 15:45:19.370676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.370705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 
00:33:48.828 [2024-07-13 15:45:19.371594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.371632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.828 [2024-07-13 15:45:19.371775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.371801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.828 [2024-07-13 15:45:19.371996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.372022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.828 [2024-07-13 15:45:19.372167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.372191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.828 [2024-07-13 15:45:19.372329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.372354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.828 [2024-07-13 15:45:19.372516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.372541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.828 [2024-07-13 15:45:19.372679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.372705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.828 [2024-07-13 15:45:19.372875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.372900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.828 [2024-07-13 15:45:19.373040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.373065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.828 [2024-07-13 15:45:19.373228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.373254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 
00:33:48.828 [2024-07-13 15:45:19.373445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.373470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.828 [2024-07-13 15:45:19.373634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.373659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.828 [2024-07-13 15:45:19.373844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.373877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.828 [2024-07-13 15:45:19.374013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.374039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.828 [2024-07-13 15:45:19.374223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.374248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.828 [2024-07-13 15:45:19.374385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.828 [2024-07-13 15:45:19.374409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.828 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.374551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.374578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.374715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.374740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.374905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.374931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.375119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.375145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 
00:33:48.829 [2024-07-13 15:45:19.375282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.375307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.375481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.375506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.375662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.375686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.375849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.375881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.376044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.376074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.376207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.376232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.376419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.376445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.376573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.376598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.376731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.376755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.376930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.376957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 
00:33:48.829 [2024-07-13 15:45:19.377118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.377144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.377314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.377338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.377504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.377529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.377723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.377748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.377902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.377928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.378065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.378090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.378225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.378250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.378412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.378439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.378600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.378626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.378757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.378782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 
00:33:48.829 [2024-07-13 15:45:19.378952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.378978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.379112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.379136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.829 [2024-07-13 15:45:19.379290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.829 [2024-07-13 15:45:19.379327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.829 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.379487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.379512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.379694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.379720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.379860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.379903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.380078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.380103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.380254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.380281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.380442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.380468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.380634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.380659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 
00:33:48.830 [2024-07-13 15:45:19.380825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.380851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.381000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.381025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.381226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.381251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.381417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.381441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.381634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.381658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.381816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.381841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.381983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.382009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.382147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.382173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.382367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.382392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.382538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.382564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 
00:33:48.830 [2024-07-13 15:45:19.382728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.382753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.382892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.382917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.383059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.383085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.383232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.383258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.383426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.383455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.383591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.383617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.383746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.383770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.383915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.383941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.384102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.384127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.384267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.384292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 
00:33:48.830 [2024-07-13 15:45:19.384480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.384505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.384631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.384655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.384845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.384876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.385018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.830 [2024-07-13 15:45:19.385043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.830 qpair failed and we were unable to recover it. 00:33:48.830 [2024-07-13 15:45:19.385184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.385208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.385339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.385364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.385524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.385548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.385708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.385732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.385905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.385931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.386090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.386116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 
00:33:48.831 [2024-07-13 15:45:19.386264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.386290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.386464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.386490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.386676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.386702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.386839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.386897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.387035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.387061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.387219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.387246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.387441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.387467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.387633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.387659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.387791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.387817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.387996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.388024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 
00:33:48.831 [2024-07-13 15:45:19.388167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.388193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.388361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.388386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.388550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.388575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.388748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.388773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.388946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.388974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.389142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.389167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.389333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.389359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.389522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.389548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.389736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.389760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.389931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.389957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 
00:33:48.831 [2024-07-13 15:45:19.390089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.390117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.390260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.390285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.390469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.390494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.390659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.390684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.390877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.390908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.391099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.391125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.391299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.391323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.831 [2024-07-13 15:45:19.391490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.831 [2024-07-13 15:45:19.391516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.831 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.391658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.391684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.391847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.391887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 
00:33:48.832 [2024-07-13 15:45:19.392048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.392073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.392240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.392264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.392393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.392419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.392653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.392680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.392884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.392910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.393052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.393078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.393241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.393266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.393512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.393537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.393689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.393715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.393879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.393905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 
00:33:48.832 [2024-07-13 15:45:19.394053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.394077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.394205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.394234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.394409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.394434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.394624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.394649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.394810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.394835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.394998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.395024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.395161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.395188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.395327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.395352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.395548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.395573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.395738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.395764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 
00:33:48.832 [2024-07-13 15:45:19.395956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.395982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.396125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.396150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.396311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.396336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.396498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.396524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.396699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.396724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.396911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.396937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.397123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.397148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.397304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.397328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.397487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.397514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 00:33:48.832 [2024-07-13 15:45:19.397708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.397734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.832 qpair failed and we were unable to recover it. 
00:33:48.832 [2024-07-13 15:45:19.397900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.832 [2024-07-13 15:45:19.397926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.398086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.398112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.398273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.398299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.398480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.398506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.398666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.398694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.398876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.398903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.399043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.399068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.399268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.399293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.399468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.399494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.399625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.399650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 
00:33:48.833 [2024-07-13 15:45:19.399827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.399861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.400032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.400058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.400233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.400258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.400416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.400441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.400617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.400644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.400831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.400857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.401002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.401027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.401175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.401201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.401366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.401393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.401563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.401588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 
00:33:48.833 [2024-07-13 15:45:19.401749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.401773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.401925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.401952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.402114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.402141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.402275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.402308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.402442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.402466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.402630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.402656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.402810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.402836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.403031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.403058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.403213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.403238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.403378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.403404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 
00:33:48.833 [2024-07-13 15:45:19.403545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.403571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.403736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.403765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.403949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.403974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.404136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.833 [2024-07-13 15:45:19.404162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.833 qpair failed and we were unable to recover it. 00:33:48.833 [2024-07-13 15:45:19.404322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.404348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 00:33:48.834 [2024-07-13 15:45:19.404537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.404564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 00:33:48.834 [2024-07-13 15:45:19.404695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.404721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 00:33:48.834 [2024-07-13 15:45:19.404856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.404889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 00:33:48.834 [2024-07-13 15:45:19.405040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.405065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 00:33:48.834 [2024-07-13 15:45:19.405225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.405259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 
00:33:48.834 [2024-07-13 15:45:19.405393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.405419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 00:33:48.834 [2024-07-13 15:45:19.405582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.405617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 00:33:48.834 [2024-07-13 15:45:19.405763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.405787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 00:33:48.834 [2024-07-13 15:45:19.405941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.405967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 00:33:48.834 [2024-07-13 15:45:19.406103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.406129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 00:33:48.834 [2024-07-13 15:45:19.406337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.406373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 00:33:48.834 [2024-07-13 15:45:19.406510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.406536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 00:33:48.834 [2024-07-13 15:45:19.406669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.406695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 00:33:48.834 [2024-07-13 15:45:19.406861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.406896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 00:33:48.834 [2024-07-13 15:45:19.407061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.407087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 
00:33:48.834 [2024-07-13 15:45:19.407247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.407273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 00:33:48.834 [2024-07-13 15:45:19.407411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.407436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 00:33:48.834 [2024-07-13 15:45:19.407597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.407623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 00:33:48.834 [2024-07-13 15:45:19.407783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.407810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 00:33:48.834 [2024-07-13 15:45:19.407989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.408016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 00:33:48.834 [2024-07-13 15:45:19.408175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.408200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 00:33:48.834 [2024-07-13 15:45:19.408359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.408384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 00:33:48.834 [2024-07-13 15:45:19.408520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.408546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 00:33:48.834 [2024-07-13 15:45:19.408748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.408774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.834 qpair failed and we were unable to recover it. 00:33:48.834 [2024-07-13 15:45:19.408948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.834 [2024-07-13 15:45:19.408974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 
00:33:48.835 [2024-07-13 15:45:19.409135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.409161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.409320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.409345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.409504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.409530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.409689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.409715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.409908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.409934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.410098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.410123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.410265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.410291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.410458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.410483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.410623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.410650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.410786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.410812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 
00:33:48.835 [2024-07-13 15:45:19.410981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.411009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.411170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.411209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.411374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.411399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.411560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.411586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.411747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.411774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.411921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.411947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.412107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.412133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.412288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.412314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.412474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.412501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.412665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.412691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 
00:33:48.835 [2024-07-13 15:45:19.412822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.412847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.413013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.413039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.413197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.413223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.413364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.413394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.413523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.413549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.413698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.413725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.413913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.413939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.414095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.414120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.414273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.414300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.414438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.414475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 
00:33:48.835 [2024-07-13 15:45:19.414633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.414658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.835 qpair failed and we were unable to recover it. 00:33:48.835 [2024-07-13 15:45:19.414816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.835 [2024-07-13 15:45:19.414841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.415010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.415037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.415165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.415190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.415319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.415344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.415502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.415527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.415683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.415709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.415846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.415886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.416050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.416076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.416214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.416239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 
00:33:48.836 [2024-07-13 15:45:19.416406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.416433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.416594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.416619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.416749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.416774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.416914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.416940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.417099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.417125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.417298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.417324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.417463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.417488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.417623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.417649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.417810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.417837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.418016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.418042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 
00:33:48.836 [2024-07-13 15:45:19.418181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.418206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.418371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.418399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.418552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.418576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.418706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.418731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.418896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.418922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.419085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.419109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.419238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.419263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.419420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.419445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.419607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.419633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.419763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.419790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 
00:33:48.836 [2024-07-13 15:45:19.419953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.419979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.420116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.420140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.420301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.420326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.420484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.420510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.420673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.420698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.836 qpair failed and we were unable to recover it. 00:33:48.836 [2024-07-13 15:45:19.420872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.836 [2024-07-13 15:45:19.420898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.421063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.421088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.421257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.421284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.421424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.421449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.421613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.421640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 
00:33:48.837 [2024-07-13 15:45:19.421795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.421821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.421995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.422021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.422206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.422231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.422393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.422420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.422579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.422604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.422763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.422787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.422976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.423002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.423169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.423194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.423362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.423389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.423525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.423550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 
00:33:48.837 [2024-07-13 15:45:19.423714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.423738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.423928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.423954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.424093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.424118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.424285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.424309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.424436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.424461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.424621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.424646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.424785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.424810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.424972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.424998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.425159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.425185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.425348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.425374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 
00:33:48.837 [2024-07-13 15:45:19.425509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.425533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.425665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.425696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.425824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.425849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.426017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.426043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.426176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.426202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.426332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.426357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.426485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.426510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.426704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.426729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.426893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.426919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.837 [2024-07-13 15:45:19.427081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.427106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 
00:33:48.837 [2024-07-13 15:45:19.427273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.837 [2024-07-13 15:45:19.427298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.837 qpair failed and we were unable to recover it. 00:33:48.838 [2024-07-13 15:45:19.427457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.427482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.838 qpair failed and we were unable to recover it. 00:33:48.838 [2024-07-13 15:45:19.427646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.427672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.838 qpair failed and we were unable to recover it. 00:33:48.838 [2024-07-13 15:45:19.427834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.427859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.838 qpair failed and we were unable to recover it. 00:33:48.838 [2024-07-13 15:45:19.428006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.428031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.838 qpair failed and we were unable to recover it. 00:33:48.838 [2024-07-13 15:45:19.428176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.428202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.838 qpair failed and we were unable to recover it. 00:33:48.838 [2024-07-13 15:45:19.428363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.428389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.838 qpair failed and we were unable to recover it. 00:33:48.838 [2024-07-13 15:45:19.428528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.428553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.838 qpair failed and we were unable to recover it. 00:33:48.838 [2024-07-13 15:45:19.428712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.428739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.838 qpair failed and we were unable to recover it. 00:33:48.838 [2024-07-13 15:45:19.428879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.428905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.838 qpair failed and we were unable to recover it. 
00:33:48.838 [2024-07-13 15:45:19.429065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.429090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.838 qpair failed and we were unable to recover it. 00:33:48.838 [2024-07-13 15:45:19.429218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.429242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.838 qpair failed and we were unable to recover it. 00:33:48.838 [2024-07-13 15:45:19.429427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.429452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.838 qpair failed and we were unable to recover it. 00:33:48.838 [2024-07-13 15:45:19.429591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.429618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.838 qpair failed and we were unable to recover it. 00:33:48.838 [2024-07-13 15:45:19.429742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.429766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.838 qpair failed and we were unable to recover it. 00:33:48.838 [2024-07-13 15:45:19.429906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.429932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.838 qpair failed and we were unable to recover it. 00:33:48.838 [2024-07-13 15:45:19.430138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.430163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.838 qpair failed and we were unable to recover it. 00:33:48.838 [2024-07-13 15:45:19.430297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.430324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.838 qpair failed and we were unable to recover it. 00:33:48.838 [2024-07-13 15:45:19.430460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.430486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.838 qpair failed and we were unable to recover it. 00:33:48.838 [2024-07-13 15:45:19.430619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.430645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.838 qpair failed and we were unable to recover it. 
00:33:48.838 [2024-07-13 15:45:19.430770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.838 [2024-07-13 15:45:19.430795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.838 qpair failed and we were unable to recover it.
00:33:48.838 [2024-07-13 15:45:19.430946] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:48.838 [2024-07-13 15:45:19.430959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.838 [2024-07-13 15:45:19.430982] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:48.838 [2024-07-13 15:45:19.430986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.838 [2024-07-13 15:45:19.430998] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:48.838 qpair failed and we were unable to recover it.
00:33:48.838 [2024-07-13 15:45:19.431010] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:48.838 [2024-07-13 15:45:19.431021] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:48.838 [2024-07-13 15:45:19.431124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.838 [2024-07-13 15:45:19.431149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.838 qpair failed and we were unable to recover it.
00:33:48.838 [2024-07-13 15:45:19.431116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:33:48.838 [2024-07-13 15:45:19.431176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:33:48.838 [2024-07-13 15:45:19.431229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:33:48.838 [2024-07-13 15:45:19.431313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.838 [2024-07-13 15:45:19.431231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:33:48.838 [2024-07-13 15:45:19.431343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.838 qpair failed and we were unable to recover it.
00:33:48.838 [2024-07-13 15:45:19.431496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.838 [2024-07-13 15:45:19.431521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.838 qpair failed and we were unable to recover it.
00:33:48.838 [2024-07-13 15:45:19.431655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.838 [2024-07-13 15:45:19.431681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.838 qpair failed and we were unable to recover it.
00:33:48.838 [2024-07-13 15:45:19.431845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.838 [2024-07-13 15:45:19.431876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.838 qpair failed and we were unable to recover it.
00:33:48.838 [2024-07-13 15:45:19.432043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.432067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.838 qpair failed and we were unable to recover it. 00:33:48.838 [2024-07-13 15:45:19.432207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.432233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.838 qpair failed and we were unable to recover it. 00:33:48.838 [2024-07-13 15:45:19.432408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.432434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.838 qpair failed and we were unable to recover it. 00:33:48.838 [2024-07-13 15:45:19.432571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.432595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.838 qpair failed and we were unable to recover it. 00:33:48.838 [2024-07-13 15:45:19.432729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.838 [2024-07-13 15:45:19.432755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.432903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.432930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.433092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.433116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.433253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.433278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.433420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.433446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.433576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.433600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 
00:33:48.839 [2024-07-13 15:45:19.433745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.433770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.433927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.433953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.434084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.434108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.434258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.434283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.434428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.434458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.434638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.434663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.434798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.434823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.434981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.435007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.435136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.435162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.435299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.435324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 
00:33:48.839 [2024-07-13 15:45:19.435478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.435503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.435658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.435682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.435814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.435840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.435982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.436007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.436142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.436167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.436328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.436354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.436483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.436507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.436694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.436718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.436892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.436918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.437086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.437112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 
00:33:48.839 [2024-07-13 15:45:19.437263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.437288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.437457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.437483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.437630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.437655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.839 [2024-07-13 15:45:19.437782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.839 [2024-07-13 15:45:19.437806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.839 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.437955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.437981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.438139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.438165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.438316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.438340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.438498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.438523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.438679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.438704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.438860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.438890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 
00:33:48.840 [2024-07-13 15:45:19.439035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.439060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.439215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.439241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.439399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.439423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.439562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.439587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.439751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.439776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.439972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.439998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.440131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.440157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.440326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.440352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.440482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.440507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.440690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.440715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 
00:33:48.840 [2024-07-13 15:45:19.440873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.440899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.441062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.441089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.441236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.441262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.441421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.441446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.441581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.441610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.441736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.441762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.441951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.441977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.442139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.442163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.442303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.442328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.442501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.442526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 
00:33:48.840 [2024-07-13 15:45:19.442664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.442688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.442828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.442861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.443048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.443074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.443198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.443222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.443453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.443478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.443616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.443642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.443794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.443820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.443970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.840 [2024-07-13 15:45:19.443995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.840 qpair failed and we were unable to recover it. 00:33:48.840 [2024-07-13 15:45:19.444164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.444190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.444343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.444368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 
00:33:48.841 [2024-07-13 15:45:19.444510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.444536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.444663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.444687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.444848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.444879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.445011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.445036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.445171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.445195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.445344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.445369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.445529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.445554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.445681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.445705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.445864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.445896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.446017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.446042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 
00:33:48.841 [2024-07-13 15:45:19.446202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.446228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.446414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.446456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.446669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.446701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.446859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.446896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.447068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.447094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.447235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.447260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.447425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.447450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.447584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.447610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.447766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.447792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.447935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.447961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 
00:33:48.841 [2024-07-13 15:45:19.448085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.448110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.448242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.448268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.448396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.448420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.448571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.448595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.448729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.448759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.448902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.448928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.449055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.449079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.449222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.449246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.449380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.449406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.449549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.449575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 
00:33:48.841 [2024-07-13 15:45:19.449740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.841 [2024-07-13 15:45:19.449765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.841 qpair failed and we were unable to recover it. 00:33:48.841 [2024-07-13 15:45:19.449891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.449917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.450080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.450106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.450285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.450310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.450469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.450494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.450684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.450710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.450876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.450902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.451037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.451062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.451297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.451323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.451453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.451479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 
00:33:48.842 [2024-07-13 15:45:19.451644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.451669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.451815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.451841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.452017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.452043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.452172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.452197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.452357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.452382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.452539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.452565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.452693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.452720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.452870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.452896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.453042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.453068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.453207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.453232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 
00:33:48.842 [2024-07-13 15:45:19.453362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.453387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.453530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.453556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.453744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.453769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.453898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.453924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.454063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.454089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.454240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.454265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.454402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.454427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.454597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.454624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.454781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.454807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 00:33:48.842 [2024-07-13 15:45:19.455012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.842 [2024-07-13 15:45:19.455037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:48.842 qpair failed and we were unable to recover it. 
00:33:48.842 [2024-07-13 15:45:19.455196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.842 [2024-07-13 15:45:19.455222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420
00:33:48.842 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1038 "connect() failed, errno = 111", nvme_tcp.c:2383 "sock connection error", "qpair failed and we were unable to recover it.") repeats continuously between 15:45:19.455 and 15:45:19.492, mostly against tqpair=0x7f7020000b90 and, toward the end, also against tqpair=0x5f2450, always with addr=10.0.0.2, port=4420 ...]
00:33:48.849 [2024-07-13 15:45:19.492085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:48.849 [2024-07-13 15:45:19.492110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420
00:33:48.849 qpair failed and we were unable to recover it.
00:33:48.849 [2024-07-13 15:45:19.492251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.492276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 00:33:48.849 [2024-07-13 15:45:19.492410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.492435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 00:33:48.849 [2024-07-13 15:45:19.492658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.492682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 00:33:48.849 [2024-07-13 15:45:19.492827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.492852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 00:33:48.849 [2024-07-13 15:45:19.493036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.493061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 00:33:48.849 [2024-07-13 15:45:19.493197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.493222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 00:33:48.849 [2024-07-13 15:45:19.493438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.493462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 00:33:48.849 [2024-07-13 15:45:19.493633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.493658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 00:33:48.849 [2024-07-13 15:45:19.493843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.493875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 00:33:48.849 [2024-07-13 15:45:19.494033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.494058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 
00:33:48.849 [2024-07-13 15:45:19.494241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.494266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 00:33:48.849 [2024-07-13 15:45:19.494412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.494437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 00:33:48.849 [2024-07-13 15:45:19.494579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.494604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 00:33:48.849 [2024-07-13 15:45:19.494760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.494785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 00:33:48.849 [2024-07-13 15:45:19.494929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.494955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 00:33:48.849 [2024-07-13 15:45:19.495121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.495148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 00:33:48.849 [2024-07-13 15:45:19.495303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.495328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 00:33:48.849 [2024-07-13 15:45:19.495487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.495513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 00:33:48.849 [2024-07-13 15:45:19.495638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.495664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 00:33:48.849 [2024-07-13 15:45:19.495798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.495823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 
00:33:48.849 [2024-07-13 15:45:19.495997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.496022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 00:33:48.849 [2024-07-13 15:45:19.496150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.496175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 00:33:48.849 [2024-07-13 15:45:19.496313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.496342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 00:33:48.849 [2024-07-13 15:45:19.496496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.496522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 00:33:48.849 [2024-07-13 15:45:19.496678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.849 [2024-07-13 15:45:19.496703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.849 qpair failed and we were unable to recover it. 00:33:48.849 [2024-07-13 15:45:19.496860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.496890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.497023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.497047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.497174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.497199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.497342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.497368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.497530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.497555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 
00:33:48.850 [2024-07-13 15:45:19.497711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.497736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.497924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.497949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.498076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.498101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.498262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.498288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.498420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.498445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.498588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.498614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.498768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.498793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.498939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.498965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.499133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.499157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.499317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.499342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 
00:33:48.850 [2024-07-13 15:45:19.499482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.499507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.499645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.499669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.499799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.499824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.500006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.500032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.500201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.500226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.500379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.500404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.500551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.500577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.500745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.500770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.500930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.500956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.501087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.501112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 
00:33:48.850 [2024-07-13 15:45:19.501253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.501278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.501405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.501430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.501590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.501617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.501771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.501797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.501936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.501962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.502094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.502119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.502249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.502274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.502427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.502452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.502611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.502636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.502762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.502786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 
00:33:48.850 [2024-07-13 15:45:19.502931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.502957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.503104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.850 [2024-07-13 15:45:19.503129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.850 qpair failed and we were unable to recover it. 00:33:48.850 [2024-07-13 15:45:19.503268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.503293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.503425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.503454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.503593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.503618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.503754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.503779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.503935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.503960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.504097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.504122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.504353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.504378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.504538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.504563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 
00:33:48.851 [2024-07-13 15:45:19.504699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.504724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.504854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.504884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.505007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.505032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.505203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.505228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.505353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.505378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.505513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.505538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.505701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.505726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.505870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.505895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.506028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.506053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.506224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.506249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 
00:33:48.851 [2024-07-13 15:45:19.506374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.506399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.506556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.506581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.506737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.506762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.506965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.506991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.507154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.507179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.507312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.507337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.507472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.507497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.507655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.507680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.507840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.507871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.508002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.508027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 
00:33:48.851 [2024-07-13 15:45:19.508170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.508199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.508363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.508388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.508518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.508545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.508678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.508704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.508842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.508885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.509026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.509051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.509184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.509209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.509335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.509360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.509486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.509511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.509695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.509720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 
00:33:48.851 [2024-07-13 15:45:19.509860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.509891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.510022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.510048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.510207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.851 [2024-07-13 15:45:19.510233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.851 qpair failed and we were unable to recover it. 00:33:48.851 [2024-07-13 15:45:19.510357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.510382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.510531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.510556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.510710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.510735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.510860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.510892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.511039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.511064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.511204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.511228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.511361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.511386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 
00:33:48.852 [2024-07-13 15:45:19.511545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.511570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.511693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.511718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.511840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.511871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.512068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.512093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.512248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.512273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.512400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.512425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.512583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.512607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.512732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.512757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.512907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.512940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.513067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.513092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 
00:33:48.852 [2024-07-13 15:45:19.513254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.513279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.513414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.513439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.513601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.513626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.513750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.513775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.513932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.513958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.514119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.514151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.514291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.514317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.514466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.514491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.514648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.514673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.514829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.514853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 
00:33:48.852 [2024-07-13 15:45:19.514993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.515018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.515144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.515173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.515311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.515336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.515519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.515544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.515674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.515699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.515857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.515888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.516045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.516070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.516195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.516220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.516376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.516401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.516561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.516586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 
00:33:48.852 [2024-07-13 15:45:19.516748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.516773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.516945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.516971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.517102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.517127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.517250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.517275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.517430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.517455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.517593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.517618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.517786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.517811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.517959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.517985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.518111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.852 [2024-07-13 15:45:19.518136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.852 qpair failed and we were unable to recover it. 00:33:48.852 [2024-07-13 15:45:19.518257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.518282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 
00:33:48.853 [2024-07-13 15:45:19.518409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.518434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.518602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.518627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.518810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.518835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.518966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.518992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.519124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.519148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.519286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.519311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.519437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.519462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.519595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.519620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.519781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.519808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.519982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.520007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 
00:33:48.853 [2024-07-13 15:45:19.520143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.520168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.520336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.520362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.520514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.520539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.520660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.520685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.520811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.520836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.520975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.521001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.521129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.521154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.521277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.521302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.521455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.521480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.521618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.521643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 
00:33:48.853 [2024-07-13 15:45:19.521783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.521808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.521970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.521996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.522157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.522181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.522302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.522327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.522461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.522486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.522610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.522635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.522764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.522789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.522926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.522952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.523136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.523161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.523294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.523318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 
00:33:48.853 [2024-07-13 15:45:19.523453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.523478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.523621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.523646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.523767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.523792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.523964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.523990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.524117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.524142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.524300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.524325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.524504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.524529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.524656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.524681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.524812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.524838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.524968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.524993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 
00:33:48.853 [2024-07-13 15:45:19.525144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.525169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.525326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.525351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.525507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.525531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.853 qpair failed and we were unable to recover it. 00:33:48.853 [2024-07-13 15:45:19.525657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.853 [2024-07-13 15:45:19.525682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.854 [2024-07-13 15:45:19.525835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.525859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.854 [2024-07-13 15:45:19.526020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.526045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.854 [2024-07-13 15:45:19.526176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.526203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.854 [2024-07-13 15:45:19.526361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.526386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.854 [2024-07-13 15:45:19.526559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.526584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.854 [2024-07-13 15:45:19.526756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.526785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 
00:33:48.854 [2024-07-13 15:45:19.526925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.526951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.854 [2024-07-13 15:45:19.527078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.527103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.854 [2024-07-13 15:45:19.527239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.527264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.854 [2024-07-13 15:45:19.527455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.527480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.854 [2024-07-13 15:45:19.527649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.527674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.854 [2024-07-13 15:45:19.527825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.527850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.854 [2024-07-13 15:45:19.527986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.528010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.854 [2024-07-13 15:45:19.528147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.528172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.854 [2024-07-13 15:45:19.528305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.528330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.854 [2024-07-13 15:45:19.528517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.528541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 
00:33:48.854 [2024-07-13 15:45:19.528666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.528691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.854 [2024-07-13 15:45:19.528863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.528894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.854 [2024-07-13 15:45:19.529074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.529099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.854 [2024-07-13 15:45:19.529243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.529268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.854 [2024-07-13 15:45:19.529409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.529434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.854 [2024-07-13 15:45:19.529568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.529595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.854 [2024-07-13 15:45:19.529735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.529760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.854 [2024-07-13 15:45:19.529935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.529960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.854 [2024-07-13 15:45:19.530118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.854 [2024-07-13 15:45:19.530143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.854 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.530272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.530296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 
00:33:48.855 [2024-07-13 15:45:19.530423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.530448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.530584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.530609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.530758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.530783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.530950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.530976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.531132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.531157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.531326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.531351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.531505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.531530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.531661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.531687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.531820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.531845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.532013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.532038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 
00:33:48.855 [2024-07-13 15:45:19.532189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.532214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.532347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.532372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.532497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.532522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.532642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.532668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.532836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.532860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.533008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.533033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.533160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.533185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.533314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.533339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.533498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.533523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.533678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.533702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 
00:33:48.855 [2024-07-13 15:45:19.533863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.533893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.534041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.534067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.534198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.534222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.534460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.534485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.534608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.534633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.534798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.534823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.534972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.534997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.535172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.535197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.535353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.535378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.535615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.535640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 
00:33:48.855 [2024-07-13 15:45:19.535797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.535822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.535973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.535998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.536125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.536150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.536306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.536331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.536474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.536499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.536651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.536676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.536825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.536850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.537043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.537068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.537205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.537230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.855 qpair failed and we were unable to recover it. 00:33:48.855 [2024-07-13 15:45:19.537372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.855 [2024-07-13 15:45:19.537397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 
00:33:48.856 [2024-07-13 15:45:19.537517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.537542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.537712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.537737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.537881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.537907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.538062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.538087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.538265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.538294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.538451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.538478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.538648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.538674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.538842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.538887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.539129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.539154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.539323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.539348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 
00:33:48.856 [2024-07-13 15:45:19.539485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.539510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.539654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.539698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.539848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.539888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.540042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.540067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.540211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.540236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.540377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.540402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.540556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.540582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.540724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.540756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.540897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.540929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.541064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.541089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 
00:33:48.856 [2024-07-13 15:45:19.541227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.541253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.541420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.541457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.541603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.541629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.541763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.541789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.541971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.541997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.542142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.542175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.542316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.542349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.542496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.542522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:48.856 [2024-07-13 15:45:19.542682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.856 [2024-07-13 15:45:19.542707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:48.856 qpair failed and we were unable to recover it. 00:33:49.124 [2024-07-13 15:45:19.542838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.124 [2024-07-13 15:45:19.542863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.124 qpair failed and we were unable to recover it. 
00:33:49.124 [2024-07-13 15:45:19.543045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.124 [2024-07-13 15:45:19.543071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.124 qpair failed and we were unable to recover it. 00:33:49.124 [2024-07-13 15:45:19.543208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.124 [2024-07-13 15:45:19.543236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.124 qpair failed and we were unable to recover it. 00:33:49.124 [2024-07-13 15:45:19.543421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.124 [2024-07-13 15:45:19.543468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.124 qpair failed and we were unable to recover it. 00:33:49.124 [2024-07-13 15:45:19.543651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.124 [2024-07-13 15:45:19.543688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.124 qpair failed and we were unable to recover it. 00:33:49.124 [2024-07-13 15:45:19.543875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.124 [2024-07-13 15:45:19.543911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.124 qpair failed and we were unable to recover it. 00:33:49.124 [2024-07-13 15:45:19.544079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.124 [2024-07-13 15:45:19.544115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.124 qpair failed and we were unable to recover it. 00:33:49.124 [2024-07-13 15:45:19.544273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.124 [2024-07-13 15:45:19.544310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.124 qpair failed and we were unable to recover it. 00:33:49.124 [2024-07-13 15:45:19.544479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.124 [2024-07-13 15:45:19.544512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.124 qpair failed and we were unable to recover it. 00:33:49.124 [2024-07-13 15:45:19.544661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.124 [2024-07-13 15:45:19.544693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.124 qpair failed and we were unable to recover it. 00:33:49.124 [2024-07-13 15:45:19.544854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.124 [2024-07-13 15:45:19.544897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.124 qpair failed and we were unable to recover it. 
00:33:49.124 [2024-07-13 15:45:19.545037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.124 [2024-07-13 15:45:19.545069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.124 qpair failed and we were unable to recover it. 00:33:49.124 [2024-07-13 15:45:19.545252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.124 [2024-07-13 15:45:19.545284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.124 qpair failed and we were unable to recover it. 00:33:49.124 [2024-07-13 15:45:19.545434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.124 [2024-07-13 15:45:19.545471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.124 qpair failed and we were unable to recover it. 00:33:49.124 [2024-07-13 15:45:19.545644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.124 [2024-07-13 15:45:19.545679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.124 qpair failed and we were unable to recover it. 00:33:49.124 [2024-07-13 15:45:19.545825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.124 [2024-07-13 15:45:19.545858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.124 qpair failed and we were unable to recover it. 00:33:49.124 [2024-07-13 15:45:19.546049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.124 [2024-07-13 15:45:19.546083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.124 qpair failed and we were unable to recover it. 00:33:49.124 [2024-07-13 15:45:19.546252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.124 [2024-07-13 15:45:19.546288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.124 qpair failed and we were unable to recover it. 00:33:49.124 [2024-07-13 15:45:19.546454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.124 [2024-07-13 15:45:19.546487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.124 qpair failed and we were unable to recover it. 00:33:49.124 [2024-07-13 15:45:19.546663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.124 [2024-07-13 15:45:19.546703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.124 qpair failed and we were unable to recover it. 00:33:49.124 [2024-07-13 15:45:19.546897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.124 [2024-07-13 15:45:19.546933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 
00:33:49.125 [2024-07-13 15:45:19.547090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.547129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.547329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.547362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.547561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.547594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.547744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.547780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.547962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.547999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.548188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.548224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.548382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.548419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.548580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.548618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.548814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.548845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.549009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.549040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 
00:33:49.125 [2024-07-13 15:45:19.549239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.549269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.549424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.549454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.549622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.549652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.549817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.549846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.550012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.550041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.550193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.550226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.550402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.550431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.550625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.550655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.550803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.550832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.551057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.551086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 
00:33:49.125 [2024-07-13 15:45:19.551261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.551290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.551454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.551483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.551644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.551672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.551841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.551879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.552047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.552077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.552319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.552361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.552514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.552548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.552747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.552782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.552965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.553001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.553188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.553225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 
00:33:49.125 [2024-07-13 15:45:19.553391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.553426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.553640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.553676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.553864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.553909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.554103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.554139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.554308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.554344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.554524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.554558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.554768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.554803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.554971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.555005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.555189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.555224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 00:33:49.125 [2024-07-13 15:45:19.555385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.125 [2024-07-13 15:45:19.555422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.125 qpair failed and we were unable to recover it. 
00:33:49.126 [2024-07-13 15:45:19.555614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.555649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.555818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.555853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.556058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.556092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.556248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.556281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.556441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.556478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.556651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.556686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.556861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.556908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.557063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.557099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.557247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.557284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.557466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.557496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 
00:33:49.126 [2024-07-13 15:45:19.557647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.557677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.557863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.557898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.558051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.558085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.558297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.558326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.558476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.558505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.558682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.558711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.558922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.558951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.559100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.559129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.559319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.559347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.559523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.559551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 
00:33:49.126 [2024-07-13 15:45:19.559706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.559735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.559891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.559921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.560110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.560139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.560292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.560322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.560493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.560522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.560682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.560711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.560905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.560936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7024000b90 with addr=10.0.0.2, port=4420 00:33:49.126 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:33:49.126 [2024-07-13 15:45:19.561094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.561128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 
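Note on the repeated errors above: errno = 111 on Linux is ECONNREFUSED, so each posix_sock_create()/nvme_tcp_qpair_connect_sock() entry is a TCP connect() to 10.0.0.2 port 4420 that was actively refused, and the initiator keeps retrying until it reports "qpair failed and we were unable to recover it." The lines below are a minimal shell sketch, added purely as an illustration and not part of the test harness, showing the same class of refused connect:

  # illustration only: a connect() to a port with no listener fails with ECONNREFUSED (errno 111)
  if ! timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "connect() to 10.0.0.2:4420 refused or unreachable (cf. errno = 111 above)"
  fi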
00:33:49.126 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:49.126 [2024-07-13 15:45:19.561316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:49.126 [2024-07-13 15:45:19.561343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:49.126 [2024-07-13 15:45:19.561501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.561528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.561682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.561707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.561835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.561861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.562024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.562050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.562211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.562238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.562371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.562396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.562524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.562549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.562691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.562716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 
00:33:49.126 [2024-07-13 15:45:19.562845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.126 [2024-07-13 15:45:19.562893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.126 qpair failed and we were unable to recover it. 00:33:49.126 [2024-07-13 15:45:19.563069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.563093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.563243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.563269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.563437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.563462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.563590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.563615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.563748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.563773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.563932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.563958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.564104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.564129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.564304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.564330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.564486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.564511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 
00:33:49.127 [2024-07-13 15:45:19.564649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.564674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.564804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.564829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.564969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.564994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.565232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.565258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.565403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.565428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.565584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.565609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.565845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.565888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.566017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.566042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.566171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.566196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.566350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.566375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 
00:33:49.127 [2024-07-13 15:45:19.566507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.566534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.566691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.566717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.566957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.566983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.567141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.567178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.567331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.567356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.567501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.567527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.567648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.567673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.567833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.567888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.568028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.568054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.568242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.568269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 
00:33:49.127 [2024-07-13 15:45:19.568446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.568472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.568610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.568636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.568820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.568846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.568993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.569019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.569170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.569195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.569369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.569407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.569533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.569558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.569705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.569731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.569894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.569921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.570080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.570106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 
00:33:49.127 [2024-07-13 15:45:19.570244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.570270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.127 [2024-07-13 15:45:19.570411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.127 [2024-07-13 15:45:19.570436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.127 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.570594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.570619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.570748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.570773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.570903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.570929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.571087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.571112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.571241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.571267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.571413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.571439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.571597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.571622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.571760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.571785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 
00:33:49.128 [2024-07-13 15:45:19.571980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.572006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.572246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.572271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.572427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.572453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.572614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.572639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.572794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.572819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.572962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.572988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.573113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.573138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.573269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.573295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.573422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.573446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.573589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.573614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 
00:33:49.128 [2024-07-13 15:45:19.573757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.573782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.573926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.573951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.574082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.574107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.574264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.574290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.574420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.574446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.574629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.574654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.574797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.574822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.575067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.575093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.575239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.575270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.575510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.575535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 
00:33:49.128 [2024-07-13 15:45:19.575675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.575700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.575855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.575895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.576045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.576070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.576208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.128 [2024-07-13 15:45:19.576233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.128 qpair failed and we were unable to recover it. 00:33:49.128 [2024-07-13 15:45:19.576406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.576431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.576570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.576596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.576791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.576816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.576967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.576992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.577123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.577148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.577280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.577306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 
00:33:49.129 [2024-07-13 15:45:19.577448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.577473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.577633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.577658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:49.129 [2024-07-13 15:45:19.577800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.577827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:49.129 [2024-07-13 15:45:19.577992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.129 [2024-07-13 15:45:19.578018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:49.129 [2024-07-13 15:45:19.578175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.578201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.578326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.578351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.578508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.578534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.578660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.578685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 
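Interleaved with the connect() retries, host/target_disconnect.sh@19 issues rpc_cmd bdev_malloc_create 64 512 -b Malloc0, creating the 64 MB, 512-byte-block malloc bdev (Malloc0) used by the disconnect test's NVMe-oF subsystem. In the SPDK test harness rpc_cmd forwards to scripts/rpc.py, so outside the harness the equivalent call would typically look like the sketch below (assumes an SPDK target listening on the default RPC socket):

  # sketch only: the same RPC as the rpc_cmd trace line above, sent directly via rpc.py
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MB bdev, 512 B block size, named Malloc0
  ./scripts/rpc.py bdev_get_bdevs -b Malloc0              # confirm the bdev was created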
00:33:49.129 [2024-07-13 15:45:19.578820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.578845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.579007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.579032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.579190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.579215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.579349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.579375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.579567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.579592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.579826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.579851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.580024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.580050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.580210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.580235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.580362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.580387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.580529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.580554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 
00:33:49.129 [2024-07-13 15:45:19.580679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.580703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.580841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.580873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.581007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.581032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.581175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.581200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.581366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.581391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.581544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.581569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.581717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.581742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.581908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.581934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.582090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.582115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.582278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.582304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 
00:33:49.129 [2024-07-13 15:45:19.582432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.582457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.582608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.582633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.582760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.582785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.582961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.582986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.583137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.583163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.129 [2024-07-13 15:45:19.583299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.129 [2024-07-13 15:45:19.583324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.129 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.583488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.583513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.583673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.583698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.583848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.583878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.584009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.584035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 
00:33:49.130 [2024-07-13 15:45:19.584162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.584187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.584345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.584370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.584534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.584561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.584694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.584719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.584880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.584906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.585062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.585087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.585215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.585242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.585375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.585400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.585554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.585579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.585729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.585754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 
00:33:49.130 [2024-07-13 15:45:19.585888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.585913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.586123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.586148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.586276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.586301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.586433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.586460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.586587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.586612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.586735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.586760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.586899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.586929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.587056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.587082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.587237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.587262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.587421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.587446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 
00:33:49.130 [2024-07-13 15:45:19.587578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.587603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.587736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.587763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.587937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.587963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.588105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.588130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.588318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.588343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.588480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.588505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.588629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.588654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.588781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.588806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.588937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.588963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.589103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.589130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 
00:33:49.130 [2024-07-13 15:45:19.589302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.589327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.589469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.589494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.589625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.589649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.589803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.589828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.589968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.589994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.590129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.590154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.130 [2024-07-13 15:45:19.590317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.130 [2024-07-13 15:45:19.590344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.130 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.590480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.590505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.590702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.590727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.590850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.590882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 
00:33:49.131 [2024-07-13 15:45:19.591029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.591054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.591211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.591237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.591361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.591387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.591518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.591544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.591725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.591751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.591918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.591944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.592094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.592119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.592258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.592283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.592455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.592480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.592705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.592730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 
00:33:49.131 [2024-07-13 15:45:19.592922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.592948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.593120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.593145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.593285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.593311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.593534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.593559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.593703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.593728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.593890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.593915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.594078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.594103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.594234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.594263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.594394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.594420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.594553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.594578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 
00:33:49.131 [2024-07-13 15:45:19.594742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.594767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.594903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.594929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.595057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.595082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.595237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.595262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.595393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.595418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.595596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.595620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.595756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.595780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.595912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.595938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.596079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.596103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.596250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.596275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 
00:33:49.131 [2024-07-13 15:45:19.596404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.596429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.596563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.596588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.596716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.596741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.596910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.596935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.597079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.597105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.597252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.597277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.597418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.597443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.131 [2024-07-13 15:45:19.597605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.131 [2024-07-13 15:45:19.597630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.131 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.597786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.597811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.597971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.597997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 
00:33:49.132 [2024-07-13 15:45:19.598139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.598165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.598321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.598346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.598482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.598508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.598673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.598699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.598836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.598876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.599022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.599048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.599240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.599264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.599392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.599417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.599552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.599576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.599702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.599727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 
00:33:49.132 [2024-07-13 15:45:19.599887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.599913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.600067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.600092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.600249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.600274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.600427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.600452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.600585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.600610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.600742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.600766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.600935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.600961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.601089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.601114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.601258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.601283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.601443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.601468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 
00:33:49.132 [2024-07-13 15:45:19.601592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.601617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.601779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.601804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.601971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.601997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.602127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.602152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.602280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.602305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.602428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.602453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.602614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.602639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.602771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.602796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.602931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.602956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.603082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.603107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 
00:33:49.132 [2024-07-13 15:45:19.603269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.603293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.603425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 Malloc0 00:33:49.132 [2024-07-13 15:45:19.603450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.603609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.603634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.132 [2024-07-13 15:45:19.603777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.603803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:49.132 [2024-07-13 15:45:19.604000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.604026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.132 [2024-07-13 15:45:19.604151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.604176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:49.132 [2024-07-13 15:45:19.604354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.132 [2024-07-13 15:45:19.604379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.132 qpair failed and we were unable to recover it. 00:33:49.132 [2024-07-13 15:45:19.604514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.604538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 
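(Interleaved with the connection retries, the test script configures the target over RPC: rpc_cmd bdev_malloc_create 64 512 -b Malloc0 creates a 64 MiB RAM-backed bdev with 512-byte blocks named Malloc0, and rpc_cmd nvmf_create_transport -t tcp -o initializes the NVMe-oF TCP transport. A minimal sketch of the same two steps using SPDK's scripts/rpc.py client directly — the harness's rpc_cmd wrapper and any flags beyond those visible in this log are assumptions:

    # create a 64 MiB malloc bdev with a 512-byte block size, named Malloc0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # initialize the NVMe-oF TCP transport in the running target
    scripts/rpc.py nvmf_create_transport -t tcp
)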
00:33:49.133 [2024-07-13 15:45:19.604698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.604722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.604856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.604887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.605035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.605060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.605185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.605210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.605361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.605386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.605513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.605538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.605734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.605759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.605904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.605930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.606060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.606086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.606255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.606280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 
00:33:49.133 [2024-07-13 15:45:19.606403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.606428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.606563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.606588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.606773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.606798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.606950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.606975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.607128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.607153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.607180] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:49.133 [2024-07-13 15:45:19.607284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.607308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.607438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.607462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.607622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.607647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.607783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.607807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.607947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.607973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 
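(The "*** TCP Transport Init ***" notice from tcp.c:nvmf_tcp_create above confirms the target created its TCP transport. For the host's retries against 10.0.0.2:4420 to start succeeding, the target would additionally need a subsystem exposing Malloc0 and a TCP listener on that address and port. A hedged sketch of those follow-up RPCs; the subsystem NQN and serial number below are illustrative placeholders, not values taken from this log:

    # create a subsystem (placeholder NQN/serial), attach Malloc0, and listen on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -f ipv4 -a 10.0.0.2 -s 4420
)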
00:33:49.133 [2024-07-13 15:45:19.608118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.608143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.608272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.608297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.608446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.608471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.608598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.608623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.608760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.608784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.608946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.608972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.609108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.609133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.609268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.609293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.609433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.609458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.609582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.609607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 
00:33:49.133 [2024-07-13 15:45:19.609764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.609789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.609951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.609976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.610104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.610133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.133 [2024-07-13 15:45:19.610271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.133 [2024-07-13 15:45:19.610298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.133 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.610449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.610474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.610636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.610661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.610815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.610841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.610977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.611002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.611136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.611163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.611323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.611348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 
00:33:49.134 [2024-07-13 15:45:19.611495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.611520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.611649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.611675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.611807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.611832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.612009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.612035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.612181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.612206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.612366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.612391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.612535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.612561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.612719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.612744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.612908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.612935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.613076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.613101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 
00:33:49.134 [2024-07-13 15:45:19.613235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.613260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.613418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.613443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.613580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.613604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.613761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.613786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.613931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.613956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.614089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.614114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.614284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.614309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.614434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.614459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.614614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.614639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.614767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.614792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 
00:33:49.134 [2024-07-13 15:45:19.614934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.614959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.615110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.615135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.615299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.615324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.134 [2024-07-13 15:45:19.615445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.615470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:49.134 [2024-07-13 15:45:19.615639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.615664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.134 [2024-07-13 15:45:19.615819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:49.134 [2024-07-13 15:45:19.615844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.615991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.616017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.616165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.616189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 
00:33:49.134 [2024-07-13 15:45:19.616326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.616351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.616502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.616527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.616678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.616703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.616829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.616858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.134 [2024-07-13 15:45:19.617031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.134 [2024-07-13 15:45:19.617057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.134 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.617194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.617219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.617351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.617377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.617532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.617557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.617717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.617742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.617903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.617929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 
00:33:49.135 [2024-07-13 15:45:19.618081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.618106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.618293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.618318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.618470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.618495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.618622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.618647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.618791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.618816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.618983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.619009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.619144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.619169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.619356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.619382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.619529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.619554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.619726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.619751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 
00:33:49.135 [2024-07-13 15:45:19.619886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.619912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.620042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.620067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.620238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.620263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.620399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.620424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.620545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.620570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.620726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.620751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.620918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.620944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.621081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.621116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.621292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.621318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.621505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.621530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 
00:33:49.135 [2024-07-13 15:45:19.621676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.621701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.621830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.621855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.622013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.622038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.622195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.622220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.622361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.622385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.622558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.622583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.622709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.622734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.622888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.622914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.623059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.623085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.623223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.623261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 
00:33:49.135 [2024-07-13 15:45:19.623427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.623453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.135 [2024-07-13 15:45:19.623592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.623617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:49.135 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.135 [2024-07-13 15:45:19.623776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.623802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:49.135 [2024-07-13 15:45:19.623956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.135 [2024-07-13 15:45:19.623991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.135 qpair failed and we were unable to recover it. 00:33:49.135 [2024-07-13 15:45:19.624144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.624181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.624326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.624352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.624493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.624518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.624644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.624669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 
00:33:49.136 [2024-07-13 15:45:19.624804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.624828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.624962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.624988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.625169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.625195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.625360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.625386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.625520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.625545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.625674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.625699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.625826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.625851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.625982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.626007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.626174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.626215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.626390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.626430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 
00:33:49.136 [2024-07-13 15:45:19.626563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.626590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.626744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.626770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.626952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.626980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.627125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.627151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.627316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.627342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.627501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.627526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.627658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.627683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.627810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.627834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.627975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.628001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.628130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.628154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 
00:33:49.136 [2024-07-13 15:45:19.628338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.628363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.628499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.628524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.628681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.628706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.628830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.628855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.628995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.629020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.629183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.629208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.629366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.629391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.629541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.629565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.629695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.629720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.629893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.629918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 
00:33:49.136 [2024-07-13 15:45:19.630049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.630074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.630205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.630229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.630380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.630404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.630561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.630585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.630713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.630737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.630901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.630930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.631052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.136 [2024-07-13 15:45:19.631077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.136 qpair failed and we were unable to recover it. 00:33:49.136 [2024-07-13 15:45:19.631206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.631231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 00:33:49.137 [2024-07-13 15:45:19.631372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.631396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 00:33:49.137 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.137 [2024-07-13 15:45:19.631530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.631557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 
00:33:49.137 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:49.137 [2024-07-13 15:45:19.631690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.631716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.137 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.137 qpair failed and we were unable to recover it. 00:33:49.137 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:49.137 [2024-07-13 15:45:19.631886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.631912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 00:33:49.137 [2024-07-13 15:45:19.632048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.632073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 00:33:49.137 [2024-07-13 15:45:19.632206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.632231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 00:33:49.137 [2024-07-13 15:45:19.632361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.632387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 00:33:49.137 [2024-07-13 15:45:19.632551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.632577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 00:33:49.137 [2024-07-13 15:45:19.632715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.632741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 00:33:49.137 [2024-07-13 15:45:19.632886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.632921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 00:33:49.137 [2024-07-13 15:45:19.633060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.633087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 
00:33:49.137 [2024-07-13 15:45:19.633237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.633263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 00:33:49.137 [2024-07-13 15:45:19.633402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.633429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7020000b90 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 00:33:49.137 [2024-07-13 15:45:19.633604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.633643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 00:33:49.137 [2024-07-13 15:45:19.633780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.633807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 00:33:49.137 [2024-07-13 15:45:19.633961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.633987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 00:33:49.137 [2024-07-13 15:45:19.634118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.634144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 00:33:49.137 [2024-07-13 15:45:19.634271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.634296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 00:33:49.137 [2024-07-13 15:45:19.634453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.634478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 00:33:49.137 [2024-07-13 15:45:19.634633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.634658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 00:33:49.137 [2024-07-13 15:45:19.634804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.634829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f2450 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 
00:33:49.137 [2024-07-13 15:45:19.634976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.635005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 00:33:49.137 [2024-07-13 15:45:19.635138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.635171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 00:33:49.137 [2024-07-13 15:45:19.635310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.137 [2024-07-13 15:45:19.635336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7018000b90 with addr=10.0.0.2, port=4420 00:33:49.137 qpair failed and we were unable to recover it. 00:33:49.137 [2024-07-13 15:45:19.635465] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:49.137 [2024-07-13 15:45:19.637897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.137 [2024-07-13 15:45:19.638062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.137 [2024-07-13 15:45:19.638090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.137 [2024-07-13 15:45:19.638107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.137 [2024-07-13 15:45:19.638120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.137 [2024-07-13 15:45:19.638157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.137 qpair failed and we were unable to recover it. 
00:33:49.137 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.137 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:49.137 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.137 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:49.137 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.137 15:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1262653 00:33:49.137 [2024-07-13 15:45:19.647773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.137 [2024-07-13 15:45:19.647918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.137 [2024-07-13 15:45:19.647946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.137 [2024-07-13 15:45:19.647961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.137 [2024-07-13 15:45:19.647974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.137 [2024-07-13 15:45:19.648004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.137 qpair failed and we were unable to recover it. 00:33:49.138 [2024-07-13 15:45:19.657808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.138 [2024-07-13 15:45:19.657955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.138 [2024-07-13 15:45:19.657982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.138 [2024-07-13 15:45:19.657996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.138 [2024-07-13 15:45:19.658009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.138 [2024-07-13 15:45:19.658040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.138 qpair failed and we were unable to recover it. 
00:33:49.138 [2024-07-13 15:45:19.667757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.138 [2024-07-13 15:45:19.667902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.138 [2024-07-13 15:45:19.667929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.138 [2024-07-13 15:45:19.667943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.138 [2024-07-13 15:45:19.667956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.138 [2024-07-13 15:45:19.667985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.138 qpair failed and we were unable to recover it. 00:33:49.138 [2024-07-13 15:45:19.677809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.138 [2024-07-13 15:45:19.677970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.138 [2024-07-13 15:45:19.677998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.138 [2024-07-13 15:45:19.678012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.138 [2024-07-13 15:45:19.678025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.138 [2024-07-13 15:45:19.678057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.138 qpair failed and we were unable to recover it. 00:33:49.138 [2024-07-13 15:45:19.687906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.138 [2024-07-13 15:45:19.688039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.138 [2024-07-13 15:45:19.688066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.138 [2024-07-13 15:45:19.688081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.138 [2024-07-13 15:45:19.688094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.138 [2024-07-13 15:45:19.688136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.138 qpair failed and we were unable to recover it. 
00:33:49.138 [2024-07-13 15:45:19.697941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.138 [2024-07-13 15:45:19.698094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.138 [2024-07-13 15:45:19.698121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.138 [2024-07-13 15:45:19.698135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.138 [2024-07-13 15:45:19.698148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.138 [2024-07-13 15:45:19.698178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.138 qpair failed and we were unable to recover it. 00:33:49.138 [2024-07-13 15:45:19.707878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.138 [2024-07-13 15:45:19.708031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.138 [2024-07-13 15:45:19.708058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.138 [2024-07-13 15:45:19.708079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.138 [2024-07-13 15:45:19.708093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.138 [2024-07-13 15:45:19.708124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.138 qpair failed and we were unable to recover it. 00:33:49.138 [2024-07-13 15:45:19.717854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.138 [2024-07-13 15:45:19.718000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.138 [2024-07-13 15:45:19.718026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.138 [2024-07-13 15:45:19.718040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.138 [2024-07-13 15:45:19.718053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.138 [2024-07-13 15:45:19.718084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.138 qpair failed and we were unable to recover it. 
00:33:49.138 [2024-07-13 15:45:19.727906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.138 [2024-07-13 15:45:19.728083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.138 [2024-07-13 15:45:19.728109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.138 [2024-07-13 15:45:19.728123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.138 [2024-07-13 15:45:19.728136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.138 [2024-07-13 15:45:19.728168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.138 qpair failed and we were unable to recover it. 00:33:49.138 [2024-07-13 15:45:19.737944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.138 [2024-07-13 15:45:19.738084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.138 [2024-07-13 15:45:19.738109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.138 [2024-07-13 15:45:19.738124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.138 [2024-07-13 15:45:19.738137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.138 [2024-07-13 15:45:19.738180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.138 qpair failed and we were unable to recover it. 00:33:49.138 [2024-07-13 15:45:19.747957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.138 [2024-07-13 15:45:19.748111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.138 [2024-07-13 15:45:19.748137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.138 [2024-07-13 15:45:19.748151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.138 [2024-07-13 15:45:19.748164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.138 [2024-07-13 15:45:19.748193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.138 qpair failed and we were unable to recover it. 
00:33:49.138 [2024-07-13 15:45:19.757992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.138 [2024-07-13 15:45:19.758125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.138 [2024-07-13 15:45:19.758151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.138 [2024-07-13 15:45:19.758165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.138 [2024-07-13 15:45:19.758179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.138 [2024-07-13 15:45:19.758208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.138 qpair failed and we were unable to recover it. 00:33:49.138 [2024-07-13 15:45:19.768085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.138 [2024-07-13 15:45:19.768219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.138 [2024-07-13 15:45:19.768246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.138 [2024-07-13 15:45:19.768260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.138 [2024-07-13 15:45:19.768273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.138 [2024-07-13 15:45:19.768303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.138 qpair failed and we were unable to recover it. 00:33:49.138 [2024-07-13 15:45:19.778061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.138 [2024-07-13 15:45:19.778190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.138 [2024-07-13 15:45:19.778216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.138 [2024-07-13 15:45:19.778230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.138 [2024-07-13 15:45:19.778243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.138 [2024-07-13 15:45:19.778279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.138 qpair failed and we were unable to recover it. 
00:33:49.138 [2024-07-13 15:45:19.788090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.138 [2024-07-13 15:45:19.788228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.138 [2024-07-13 15:45:19.788254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.138 [2024-07-13 15:45:19.788268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.138 [2024-07-13 15:45:19.788281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.138 [2024-07-13 15:45:19.788310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.138 qpair failed and we were unable to recover it. 00:33:49.138 [2024-07-13 15:45:19.798124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.138 [2024-07-13 15:45:19.798268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.139 [2024-07-13 15:45:19.798299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.139 [2024-07-13 15:45:19.798315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.139 [2024-07-13 15:45:19.798328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.139 [2024-07-13 15:45:19.798357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.139 qpair failed and we were unable to recover it. 00:33:49.139 [2024-07-13 15:45:19.808259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.139 [2024-07-13 15:45:19.808392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.139 [2024-07-13 15:45:19.808418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.139 [2024-07-13 15:45:19.808432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.139 [2024-07-13 15:45:19.808445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.139 [2024-07-13 15:45:19.808474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.139 qpair failed and we were unable to recover it. 
00:33:49.139 [2024-07-13 15:45:19.818168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.139 [2024-07-13 15:45:19.818309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.139 [2024-07-13 15:45:19.818335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.139 [2024-07-13 15:45:19.818349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.139 [2024-07-13 15:45:19.818362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.139 [2024-07-13 15:45:19.818393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.139 qpair failed and we were unable to recover it. 00:33:49.139 [2024-07-13 15:45:19.828236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.139 [2024-07-13 15:45:19.828402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.139 [2024-07-13 15:45:19.828429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.139 [2024-07-13 15:45:19.828443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.139 [2024-07-13 15:45:19.828456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.139 [2024-07-13 15:45:19.828498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.139 qpair failed and we were unable to recover it. 00:33:49.139 [2024-07-13 15:45:19.838310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.139 [2024-07-13 15:45:19.838448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.139 [2024-07-13 15:45:19.838474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.139 [2024-07-13 15:45:19.838489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.139 [2024-07-13 15:45:19.838502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.139 [2024-07-13 15:45:19.838549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.139 qpair failed and we were unable to recover it. 
00:33:49.139 [2024-07-13 15:45:19.848315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.139 [2024-07-13 15:45:19.848469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.139 [2024-07-13 15:45:19.848496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.139 [2024-07-13 15:45:19.848511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.139 [2024-07-13 15:45:19.848527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.139 [2024-07-13 15:45:19.848559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.139 qpair failed and we were unable to recover it. 00:33:49.139 [2024-07-13 15:45:19.858363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.139 [2024-07-13 15:45:19.858490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.139 [2024-07-13 15:45:19.858517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.139 [2024-07-13 15:45:19.858532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.139 [2024-07-13 15:45:19.858545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.139 [2024-07-13 15:45:19.858575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.139 qpair failed and we were unable to recover it. 00:33:49.139 [2024-07-13 15:45:19.868339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.139 [2024-07-13 15:45:19.868520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.139 [2024-07-13 15:45:19.868545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.139 [2024-07-13 15:45:19.868560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.139 [2024-07-13 15:45:19.868573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.139 [2024-07-13 15:45:19.868603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.139 qpair failed and we were unable to recover it. 
00:33:49.139 [2024-07-13 15:45:19.878380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.139 [2024-07-13 15:45:19.878523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.139 [2024-07-13 15:45:19.878553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.139 [2024-07-13 15:45:19.878573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.139 [2024-07-13 15:45:19.878586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.139 [2024-07-13 15:45:19.878616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.139 qpair failed and we were unable to recover it. 00:33:49.399 [2024-07-13 15:45:19.888398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.399 [2024-07-13 15:45:19.888531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.399 [2024-07-13 15:45:19.888563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.399 [2024-07-13 15:45:19.888578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.399 [2024-07-13 15:45:19.888591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.399 [2024-07-13 15:45:19.888620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.399 qpair failed and we were unable to recover it. 00:33:49.399 [2024-07-13 15:45:19.898406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.399 [2024-07-13 15:45:19.898539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.399 [2024-07-13 15:45:19.898565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.399 [2024-07-13 15:45:19.898580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.399 [2024-07-13 15:45:19.898592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.399 [2024-07-13 15:45:19.898622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.399 qpair failed and we were unable to recover it. 
00:33:49.399 [2024-07-13 15:45:19.908447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.399 [2024-07-13 15:45:19.908587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.399 [2024-07-13 15:45:19.908613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.399 [2024-07-13 15:45:19.908633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.399 [2024-07-13 15:45:19.908648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.399 [2024-07-13 15:45:19.908678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.399 qpair failed and we were unable to recover it. 00:33:49.399 [2024-07-13 15:45:19.918511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.399 [2024-07-13 15:45:19.918644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.399 [2024-07-13 15:45:19.918671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.399 [2024-07-13 15:45:19.918685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.399 [2024-07-13 15:45:19.918698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.399 [2024-07-13 15:45:19.918727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.399 qpair failed and we were unable to recover it. 00:33:49.399 [2024-07-13 15:45:19.928499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.399 [2024-07-13 15:45:19.928646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.399 [2024-07-13 15:45:19.928672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.399 [2024-07-13 15:45:19.928686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.399 [2024-07-13 15:45:19.928704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.399 [2024-07-13 15:45:19.928735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.399 qpair failed and we were unable to recover it. 
00:33:49.399 [2024-07-13 15:45:19.938580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.399 [2024-07-13 15:45:19.938713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.399 [2024-07-13 15:45:19.938740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.399 [2024-07-13 15:45:19.938754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.399 [2024-07-13 15:45:19.938767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.399 [2024-07-13 15:45:19.938798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.399 qpair failed and we were unable to recover it. 00:33:49.399 [2024-07-13 15:45:19.948545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.399 [2024-07-13 15:45:19.948681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.399 [2024-07-13 15:45:19.948707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.399 [2024-07-13 15:45:19.948722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.399 [2024-07-13 15:45:19.948734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.399 [2024-07-13 15:45:19.948763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.399 qpair failed and we were unable to recover it. 00:33:49.399 [2024-07-13 15:45:19.958572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.399 [2024-07-13 15:45:19.958713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.399 [2024-07-13 15:45:19.958740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.399 [2024-07-13 15:45:19.958754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.399 [2024-07-13 15:45:19.958767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.399 [2024-07-13 15:45:19.958796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.399 qpair failed and we were unable to recover it. 
00:33:49.399 [2024-07-13 15:45:19.968656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.399 [2024-07-13 15:45:19.968828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.399 [2024-07-13 15:45:19.968854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.399 [2024-07-13 15:45:19.968879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.399 [2024-07-13 15:45:19.968895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.399 [2024-07-13 15:45:19.968924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.399 qpair failed and we were unable to recover it. 00:33:49.399 [2024-07-13 15:45:19.978615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.399 [2024-07-13 15:45:19.978750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.399 [2024-07-13 15:45:19.978776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.399 [2024-07-13 15:45:19.978790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.399 [2024-07-13 15:45:19.978803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.399 [2024-07-13 15:45:19.978832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.399 qpair failed and we were unable to recover it. 00:33:49.399 [2024-07-13 15:45:19.988664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.399 [2024-07-13 15:45:19.988802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.399 [2024-07-13 15:45:19.988828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.399 [2024-07-13 15:45:19.988842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.399 [2024-07-13 15:45:19.988857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.399 [2024-07-13 15:45:19.988895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.399 qpair failed and we were unable to recover it. 
00:33:49.399 [2024-07-13 15:45:19.998674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.399 [2024-07-13 15:45:19.998812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.399 [2024-07-13 15:45:19.998838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.399 [2024-07-13 15:45:19.998852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.399 [2024-07-13 15:45:19.998872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.400 [2024-07-13 15:45:19.998904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.400 qpair failed and we were unable to recover it. 00:33:49.400 [2024-07-13 15:45:20.008735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.400 [2024-07-13 15:45:20.008911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.400 [2024-07-13 15:45:20.008949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.400 [2024-07-13 15:45:20.008971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.400 [2024-07-13 15:45:20.008990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.400 [2024-07-13 15:45:20.009034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.400 qpair failed and we were unable to recover it. 00:33:49.400 [2024-07-13 15:45:20.018769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.400 [2024-07-13 15:45:20.018963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.400 [2024-07-13 15:45:20.019000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.400 [2024-07-13 15:45:20.019026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.400 [2024-07-13 15:45:20.019056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.400 [2024-07-13 15:45:20.019103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.400 qpair failed and we were unable to recover it. 
00:33:49.400 [2024-07-13 15:45:20.028820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.400 [2024-07-13 15:45:20.028990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.400 [2024-07-13 15:45:20.029023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.400 [2024-07-13 15:45:20.029045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.400 [2024-07-13 15:45:20.029067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.400 [2024-07-13 15:45:20.029111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.400 qpair failed and we were unable to recover it. 00:33:49.400 [2024-07-13 15:45:20.038815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.400 [2024-07-13 15:45:20.038973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.400 [2024-07-13 15:45:20.039010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.400 [2024-07-13 15:45:20.039035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.400 [2024-07-13 15:45:20.039055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.400 [2024-07-13 15:45:20.039096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.400 qpair failed and we were unable to recover it. 00:33:49.400 [2024-07-13 15:45:20.048858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.400 [2024-07-13 15:45:20.049014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.400 [2024-07-13 15:45:20.049042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.400 [2024-07-13 15:45:20.049057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.400 [2024-07-13 15:45:20.049070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.400 [2024-07-13 15:45:20.049102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.400 qpair failed and we were unable to recover it. 
00:33:49.400 [2024-07-13 15:45:20.058860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.400 [2024-07-13 15:45:20.059005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.400 [2024-07-13 15:45:20.059032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.400 [2024-07-13 15:45:20.059047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.400 [2024-07-13 15:45:20.059060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.400 [2024-07-13 15:45:20.059091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.400 qpair failed and we were unable to recover it. 00:33:49.400 [2024-07-13 15:45:20.068905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.400 [2024-07-13 15:45:20.069086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.400 [2024-07-13 15:45:20.069112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.400 [2024-07-13 15:45:20.069127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.400 [2024-07-13 15:45:20.069140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.400 [2024-07-13 15:45:20.069169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.400 qpair failed and we were unable to recover it. 00:33:49.400 [2024-07-13 15:45:20.078910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.400 [2024-07-13 15:45:20.079088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.400 [2024-07-13 15:45:20.079114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.400 [2024-07-13 15:45:20.079129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.400 [2024-07-13 15:45:20.079141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.400 [2024-07-13 15:45:20.079172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.400 qpair failed and we were unable to recover it. 
00:33:49.400 [2024-07-13 15:45:20.088913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.400 [2024-07-13 15:45:20.089043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.400 [2024-07-13 15:45:20.089069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.400 [2024-07-13 15:45:20.089083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.400 [2024-07-13 15:45:20.089096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.400 [2024-07-13 15:45:20.089127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.400 qpair failed and we were unable to recover it. 00:33:49.400 [2024-07-13 15:45:20.098939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.400 [2024-07-13 15:45:20.099072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.400 [2024-07-13 15:45:20.099099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.400 [2024-07-13 15:45:20.099113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.400 [2024-07-13 15:45:20.099126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.400 [2024-07-13 15:45:20.099155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.400 qpair failed and we were unable to recover it. 00:33:49.400 [2024-07-13 15:45:20.109016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.400 [2024-07-13 15:45:20.109199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.400 [2024-07-13 15:45:20.109224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.400 [2024-07-13 15:45:20.109245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.400 [2024-07-13 15:45:20.109259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.400 [2024-07-13 15:45:20.109290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.400 qpair failed and we were unable to recover it. 
00:33:49.400 [2024-07-13 15:45:20.118988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.400 [2024-07-13 15:45:20.119126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.400 [2024-07-13 15:45:20.119152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.400 [2024-07-13 15:45:20.119167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.400 [2024-07-13 15:45:20.119179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.400 [2024-07-13 15:45:20.119214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.400 qpair failed and we were unable to recover it. 00:33:49.400 [2024-07-13 15:45:20.129028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.400 [2024-07-13 15:45:20.129174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.400 [2024-07-13 15:45:20.129200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.400 [2024-07-13 15:45:20.129214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.400 [2024-07-13 15:45:20.129227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.400 [2024-07-13 15:45:20.129258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.400 qpair failed and we were unable to recover it. 00:33:49.400 [2024-07-13 15:45:20.139070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.400 [2024-07-13 15:45:20.139200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.400 [2024-07-13 15:45:20.139225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.400 [2024-07-13 15:45:20.139239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.400 [2024-07-13 15:45:20.139253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.400 [2024-07-13 15:45:20.139284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.400 qpair failed and we were unable to recover it. 
00:33:49.401 [2024-07-13 15:45:20.149102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.401 [2024-07-13 15:45:20.149254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.401 [2024-07-13 15:45:20.149279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.401 [2024-07-13 15:45:20.149293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.401 [2024-07-13 15:45:20.149306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.401 [2024-07-13 15:45:20.149335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.401 qpair failed and we were unable to recover it. 00:33:49.401 [2024-07-13 15:45:20.159276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.401 [2024-07-13 15:45:20.159415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.401 [2024-07-13 15:45:20.159441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.401 [2024-07-13 15:45:20.159455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.401 [2024-07-13 15:45:20.159468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.401 [2024-07-13 15:45:20.159496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.401 qpair failed and we were unable to recover it. 00:33:49.660 [2024-07-13 15:45:20.169163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.661 [2024-07-13 15:45:20.169294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.661 [2024-07-13 15:45:20.169319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.661 [2024-07-13 15:45:20.169333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.661 [2024-07-13 15:45:20.169355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.661 [2024-07-13 15:45:20.169385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.661 qpair failed and we were unable to recover it. 
00:33:49.661 [2024-07-13 15:45:20.179241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.661 [2024-07-13 15:45:20.179397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.661 [2024-07-13 15:45:20.179423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.661 [2024-07-13 15:45:20.179437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.661 [2024-07-13 15:45:20.179450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.661 [2024-07-13 15:45:20.179482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.661 qpair failed and we were unable to recover it. 00:33:49.661 [2024-07-13 15:45:20.189249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.661 [2024-07-13 15:45:20.189429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.661 [2024-07-13 15:45:20.189455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.661 [2024-07-13 15:45:20.189469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.661 [2024-07-13 15:45:20.189482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.661 [2024-07-13 15:45:20.189513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.661 qpair failed and we were unable to recover it. 00:33:49.661 [2024-07-13 15:45:20.199228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.661 [2024-07-13 15:45:20.199371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.661 [2024-07-13 15:45:20.199403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.661 [2024-07-13 15:45:20.199418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.661 [2024-07-13 15:45:20.199431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.661 [2024-07-13 15:45:20.199461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.661 qpair failed and we were unable to recover it. 
00:33:49.661 [2024-07-13 15:45:20.209265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.661 [2024-07-13 15:45:20.209394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.661 [2024-07-13 15:45:20.209421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.661 [2024-07-13 15:45:20.209435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.661 [2024-07-13 15:45:20.209448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.661 [2024-07-13 15:45:20.209477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.661 qpair failed and we were unable to recover it. 00:33:49.661 [2024-07-13 15:45:20.219266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.661 [2024-07-13 15:45:20.219396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.661 [2024-07-13 15:45:20.219421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.661 [2024-07-13 15:45:20.219435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.661 [2024-07-13 15:45:20.219447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.661 [2024-07-13 15:45:20.219476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.661 qpair failed and we were unable to recover it. 00:33:49.661 [2024-07-13 15:45:20.229455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.661 [2024-07-13 15:45:20.229619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.661 [2024-07-13 15:45:20.229645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.661 [2024-07-13 15:45:20.229659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.661 [2024-07-13 15:45:20.229671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.661 [2024-07-13 15:45:20.229702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.661 qpair failed and we were unable to recover it. 
00:33:49.661 [2024-07-13 15:45:20.239344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.661 [2024-07-13 15:45:20.239476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.661 [2024-07-13 15:45:20.239502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.661 [2024-07-13 15:45:20.239516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.661 [2024-07-13 15:45:20.239529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.661 [2024-07-13 15:45:20.239565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.661 qpair failed and we were unable to recover it. 00:33:49.661 [2024-07-13 15:45:20.249382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.661 [2024-07-13 15:45:20.249518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.661 [2024-07-13 15:45:20.249544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.661 [2024-07-13 15:45:20.249558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.661 [2024-07-13 15:45:20.249571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.661 [2024-07-13 15:45:20.249602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.661 qpair failed and we were unable to recover it. 00:33:49.661 [2024-07-13 15:45:20.259413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.661 [2024-07-13 15:45:20.259586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.661 [2024-07-13 15:45:20.259611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.661 [2024-07-13 15:45:20.259625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.661 [2024-07-13 15:45:20.259638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.661 [2024-07-13 15:45:20.259668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.661 qpair failed and we were unable to recover it. 
00:33:49.661 [2024-07-13 15:45:20.269465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.661 [2024-07-13 15:45:20.269623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.661 [2024-07-13 15:45:20.269648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.661 [2024-07-13 15:45:20.269661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.661 [2024-07-13 15:45:20.269674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.661 [2024-07-13 15:45:20.269703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.661 qpair failed and we were unable to recover it. 00:33:49.661 [2024-07-13 15:45:20.279537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.661 [2024-07-13 15:45:20.279673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.661 [2024-07-13 15:45:20.279698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.661 [2024-07-13 15:45:20.279713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.661 [2024-07-13 15:45:20.279726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.661 [2024-07-13 15:45:20.279768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.661 qpair failed and we were unable to recover it. 00:33:49.661 [2024-07-13 15:45:20.289494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.661 [2024-07-13 15:45:20.289631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.661 [2024-07-13 15:45:20.289666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.661 [2024-07-13 15:45:20.289681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.661 [2024-07-13 15:45:20.289694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.661 [2024-07-13 15:45:20.289723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.661 qpair failed and we were unable to recover it. 
00:33:49.661 [2024-07-13 15:45:20.299507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.661 [2024-07-13 15:45:20.299647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.661 [2024-07-13 15:45:20.299673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.661 [2024-07-13 15:45:20.299687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.661 [2024-07-13 15:45:20.299700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.661 [2024-07-13 15:45:20.299730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.661 qpair failed and we were unable to recover it. 00:33:49.661 [2024-07-13 15:45:20.309564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.662 [2024-07-13 15:45:20.309708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.662 [2024-07-13 15:45:20.309734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.662 [2024-07-13 15:45:20.309748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.662 [2024-07-13 15:45:20.309760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.662 [2024-07-13 15:45:20.309802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.662 qpair failed and we were unable to recover it. 00:33:49.662 [2024-07-13 15:45:20.319570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.662 [2024-07-13 15:45:20.319718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.662 [2024-07-13 15:45:20.319743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.662 [2024-07-13 15:45:20.319757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.662 [2024-07-13 15:45:20.319770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.662 [2024-07-13 15:45:20.319799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.662 qpair failed and we were unable to recover it. 
00:33:49.662 [2024-07-13 15:45:20.329600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.662 [2024-07-13 15:45:20.329733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.662 [2024-07-13 15:45:20.329759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.662 [2024-07-13 15:45:20.329773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.662 [2024-07-13 15:45:20.329792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.662 [2024-07-13 15:45:20.329835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.662 qpair failed and we were unable to recover it. 00:33:49.662 [2024-07-13 15:45:20.339628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.662 [2024-07-13 15:45:20.339762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.662 [2024-07-13 15:45:20.339788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.662 [2024-07-13 15:45:20.339802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.662 [2024-07-13 15:45:20.339815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.662 [2024-07-13 15:45:20.339845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.662 qpair failed and we were unable to recover it. 00:33:49.662 [2024-07-13 15:45:20.349681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.662 [2024-07-13 15:45:20.349842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.662 [2024-07-13 15:45:20.349873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.662 [2024-07-13 15:45:20.349889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.662 [2024-07-13 15:45:20.349902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.662 [2024-07-13 15:45:20.349931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.662 qpair failed and we were unable to recover it. 
00:33:49.662 [2024-07-13 15:45:20.359672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.662 [2024-07-13 15:45:20.359802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.662 [2024-07-13 15:45:20.359828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.662 [2024-07-13 15:45:20.359841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.662 [2024-07-13 15:45:20.359854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.662 [2024-07-13 15:45:20.359892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.662 qpair failed and we were unable to recover it. 00:33:49.662 [2024-07-13 15:45:20.369786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.662 [2024-07-13 15:45:20.369930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.662 [2024-07-13 15:45:20.369956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.662 [2024-07-13 15:45:20.369970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.662 [2024-07-13 15:45:20.369983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.662 [2024-07-13 15:45:20.370025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.662 qpair failed and we were unable to recover it. 00:33:49.662 [2024-07-13 15:45:20.379731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.662 [2024-07-13 15:45:20.379879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.662 [2024-07-13 15:45:20.379905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.662 [2024-07-13 15:45:20.379921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.662 [2024-07-13 15:45:20.379934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.662 [2024-07-13 15:45:20.379963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.662 qpair failed and we were unable to recover it. 
00:33:49.662 [2024-07-13 15:45:20.389771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.662 [2024-07-13 15:45:20.389910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.662 [2024-07-13 15:45:20.389936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.662 [2024-07-13 15:45:20.389950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.662 [2024-07-13 15:45:20.389963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.662 [2024-07-13 15:45:20.389994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.662 qpair failed and we were unable to recover it. 00:33:49.662 [2024-07-13 15:45:20.399810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.662 [2024-07-13 15:45:20.399950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.662 [2024-07-13 15:45:20.399976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.662 [2024-07-13 15:45:20.399990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.662 [2024-07-13 15:45:20.400004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.662 [2024-07-13 15:45:20.400034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.662 qpair failed and we were unable to recover it. 00:33:49.662 [2024-07-13 15:45:20.409832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.662 [2024-07-13 15:45:20.409976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.662 [2024-07-13 15:45:20.410003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.662 [2024-07-13 15:45:20.410018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.662 [2024-07-13 15:45:20.410031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.662 [2024-07-13 15:45:20.410061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.662 qpair failed and we were unable to recover it. 
00:33:49.662 [2024-07-13 15:45:20.419848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.662 [2024-07-13 15:45:20.420000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.662 [2024-07-13 15:45:20.420026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.662 [2024-07-13 15:45:20.420040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.662 [2024-07-13 15:45:20.420058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.662 [2024-07-13 15:45:20.420089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.662 qpair failed and we were unable to recover it. 00:33:49.926 [2024-07-13 15:45:20.429955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.430124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.430153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.430167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.430180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.430209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 00:33:49.926 [2024-07-13 15:45:20.439924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.440062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.440088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.440102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.440115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.440145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 
00:33:49.926 [2024-07-13 15:45:20.450086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.450230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.450256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.450270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.450284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.450312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 00:33:49.926 [2024-07-13 15:45:20.460047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.460205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.460231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.460245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.460258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.460288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 00:33:49.926 [2024-07-13 15:45:20.470096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.470254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.470280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.470294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.470307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.470336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 
00:33:49.926 [2024-07-13 15:45:20.480088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.480235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.480262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.480276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.480290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.480333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 00:33:49.926 [2024-07-13 15:45:20.490048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.490179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.490205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.490219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.490232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.490263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 00:33:49.926 [2024-07-13 15:45:20.500081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.500215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.500241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.500255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.500268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.500300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 
00:33:49.926 [2024-07-13 15:45:20.510141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.510277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.510303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.510324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.510338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.510369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 00:33:49.926 [2024-07-13 15:45:20.520171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.520313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.520338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.520352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.520365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.520395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 00:33:49.926 [2024-07-13 15:45:20.530159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.530289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.530314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.530328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.530341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.530371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 
00:33:49.926 [2024-07-13 15:45:20.540195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.540329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.540355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.540369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.540382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.540411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 00:33:49.926 [2024-07-13 15:45:20.550320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.550500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.550528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.550549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.550563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.550594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 00:33:49.926 [2024-07-13 15:45:20.560244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.560376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.560403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.560417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.560430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.560460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 
00:33:49.926 [2024-07-13 15:45:20.570273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.570405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.570431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.570445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.570459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.570487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 00:33:49.926 [2024-07-13 15:45:20.580306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.580437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.580463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.580478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.580491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.580520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 00:33:49.926 [2024-07-13 15:45:20.590400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.590533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.590559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.590573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.590585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.590615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 
00:33:49.926 [2024-07-13 15:45:20.600387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.600546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.600577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.600592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.600605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.600634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 00:33:49.926 [2024-07-13 15:45:20.610485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.610621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.610647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.610662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.610674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.610703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 00:33:49.926 [2024-07-13 15:45:20.620413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.620547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.620573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.620587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.620600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.620631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 
00:33:49.926 [2024-07-13 15:45:20.630481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.630626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.630651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.630665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.630678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.630707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 00:33:49.926 [2024-07-13 15:45:20.640631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.640791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.640816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.640830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.640843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.640885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 00:33:49.926 [2024-07-13 15:45:20.650526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.650694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.650719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.650733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.650746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.650777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 
00:33:49.926 [2024-07-13 15:45:20.660521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.660650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.660676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.660690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.660703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.660733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 00:33:49.926 [2024-07-13 15:45:20.670627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.670774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.670800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.670815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.670828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.670858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 00:33:49.926 [2024-07-13 15:45:20.680605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:49.926 [2024-07-13 15:45:20.680751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:49.926 [2024-07-13 15:45:20.680775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:49.926 [2024-07-13 15:45:20.680789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:49.926 [2024-07-13 15:45:20.680802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:49.926 [2024-07-13 15:45:20.680832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:49.926 qpair failed and we were unable to recover it. 
00:33:50.187 [2024-07-13 15:45:20.690747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.187 [2024-07-13 15:45:20.690894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.187 [2024-07-13 15:45:20.690925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.187 [2024-07-13 15:45:20.690940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.187 [2024-07-13 15:45:20.690953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.187 [2024-07-13 15:45:20.690985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.187 qpair failed and we were unable to recover it. 00:33:50.187 [2024-07-13 15:45:20.700650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.187 [2024-07-13 15:45:20.700815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.187 [2024-07-13 15:45:20.700840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.187 [2024-07-13 15:45:20.700854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.187 [2024-07-13 15:45:20.700874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.187 [2024-07-13 15:45:20.700905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.187 qpair failed and we were unable to recover it. 00:33:50.187 [2024-07-13 15:45:20.710831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.187 [2024-07-13 15:45:20.711007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.187 [2024-07-13 15:45:20.711032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.187 [2024-07-13 15:45:20.711047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.187 [2024-07-13 15:45:20.711059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.187 [2024-07-13 15:45:20.711088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.188 qpair failed and we were unable to recover it. 
00:33:50.188 [2024-07-13 15:45:20.720726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.188 [2024-07-13 15:45:20.720860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.188 [2024-07-13 15:45:20.720892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.188 [2024-07-13 15:45:20.720907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.188 [2024-07-13 15:45:20.720920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.188 [2024-07-13 15:45:20.720949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.188 qpair failed and we were unable to recover it. 00:33:50.188 [2024-07-13 15:45:20.730730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.188 [2024-07-13 15:45:20.730874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.188 [2024-07-13 15:45:20.730901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.188 [2024-07-13 15:45:20.730915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.188 [2024-07-13 15:45:20.730928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.188 [2024-07-13 15:45:20.730964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.188 qpair failed and we were unable to recover it. 00:33:50.188 [2024-07-13 15:45:20.740762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.188 [2024-07-13 15:45:20.740908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.188 [2024-07-13 15:45:20.740934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.188 [2024-07-13 15:45:20.740948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.188 [2024-07-13 15:45:20.740961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.188 [2024-07-13 15:45:20.740991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.188 qpair failed and we were unable to recover it. 
00:33:50.188 [2024-07-13 15:45:20.750815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.188 [2024-07-13 15:45:20.750963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.188 [2024-07-13 15:45:20.750989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.188 [2024-07-13 15:45:20.751003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.188 [2024-07-13 15:45:20.751017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.188 [2024-07-13 15:45:20.751047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.188 qpair failed and we were unable to recover it. 00:33:50.188 [2024-07-13 15:45:20.760826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.188 [2024-07-13 15:45:20.760983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.188 [2024-07-13 15:45:20.761009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.188 [2024-07-13 15:45:20.761023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.188 [2024-07-13 15:45:20.761036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.188 [2024-07-13 15:45:20.761066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.188 qpair failed and we were unable to recover it. 00:33:50.188 [2024-07-13 15:45:20.770840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.188 [2024-07-13 15:45:20.770986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.188 [2024-07-13 15:45:20.771011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.188 [2024-07-13 15:45:20.771025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.188 [2024-07-13 15:45:20.771038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.188 [2024-07-13 15:45:20.771068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.188 qpair failed and we were unable to recover it. 
00:33:50.188 [2024-07-13 15:45:20.780892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.188 [2024-07-13 15:45:20.781031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.188 [2024-07-13 15:45:20.781057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.188 [2024-07-13 15:45:20.781071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.188 [2024-07-13 15:45:20.781084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.188 [2024-07-13 15:45:20.781128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.188 qpair failed and we were unable to recover it. 00:33:50.188 [2024-07-13 15:45:20.791016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.188 [2024-07-13 15:45:20.791154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.188 [2024-07-13 15:45:20.791179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.188 [2024-07-13 15:45:20.791193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.188 [2024-07-13 15:45:20.791206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.188 [2024-07-13 15:45:20.791237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.188 qpair failed and we were unable to recover it. 00:33:50.188 [2024-07-13 15:45:20.800979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.188 [2024-07-13 15:45:20.801161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.188 [2024-07-13 15:45:20.801188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.188 [2024-07-13 15:45:20.801201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.188 [2024-07-13 15:45:20.801214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.188 [2024-07-13 15:45:20.801243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.188 qpair failed and we were unable to recover it. 
00:33:50.188 [2024-07-13 15:45:20.810978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.188 [2024-07-13 15:45:20.811109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.188 [2024-07-13 15:45:20.811134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.188 [2024-07-13 15:45:20.811148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.188 [2024-07-13 15:45:20.811161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.188 [2024-07-13 15:45:20.811192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.188 qpair failed and we were unable to recover it. 00:33:50.188 [2024-07-13 15:45:20.821013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.188 [2024-07-13 15:45:20.821152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.188 [2024-07-13 15:45:20.821177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.188 [2024-07-13 15:45:20.821191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.188 [2024-07-13 15:45:20.821210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.188 [2024-07-13 15:45:20.821240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.188 qpair failed and we were unable to recover it. 00:33:50.188 [2024-07-13 15:45:20.831064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.188 [2024-07-13 15:45:20.831248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.188 [2024-07-13 15:45:20.831273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.188 [2024-07-13 15:45:20.831287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.188 [2024-07-13 15:45:20.831300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.188 [2024-07-13 15:45:20.831329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.188 qpair failed and we were unable to recover it. 
00:33:50.188 [2024-07-13 15:45:20.841091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.188 [2024-07-13 15:45:20.841226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.188 [2024-07-13 15:45:20.841251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.188 [2024-07-13 15:45:20.841264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.188 [2024-07-13 15:45:20.841277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.188 [2024-07-13 15:45:20.841308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.188 qpair failed and we were unable to recover it. 00:33:50.188 [2024-07-13 15:45:20.851067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.188 [2024-07-13 15:45:20.851197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.188 [2024-07-13 15:45:20.851222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.188 [2024-07-13 15:45:20.851236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.188 [2024-07-13 15:45:20.851249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.188 [2024-07-13 15:45:20.851278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-13 15:45:20.861104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.189 [2024-07-13 15:45:20.861236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.189 [2024-07-13 15:45:20.861261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.189 [2024-07-13 15:45:20.861275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.189 [2024-07-13 15:45:20.861289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.189 [2024-07-13 15:45:20.861318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.189 qpair failed and we were unable to recover it. 
00:33:50.189 [2024-07-13 15:45:20.871147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.189 [2024-07-13 15:45:20.871299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.189 [2024-07-13 15:45:20.871324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.189 [2024-07-13 15:45:20.871338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.189 [2024-07-13 15:45:20.871352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.189 [2024-07-13 15:45:20.871381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-13 15:45:20.881199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.189 [2024-07-13 15:45:20.881338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.189 [2024-07-13 15:45:20.881363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.189 [2024-07-13 15:45:20.881377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.189 [2024-07-13 15:45:20.881390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.189 [2024-07-13 15:45:20.881420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-13 15:45:20.891188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.189 [2024-07-13 15:45:20.891326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.189 [2024-07-13 15:45:20.891352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.189 [2024-07-13 15:45:20.891366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.189 [2024-07-13 15:45:20.891379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.189 [2024-07-13 15:45:20.891408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.189 qpair failed and we were unable to recover it. 
00:33:50.189 [2024-07-13 15:45:20.901194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.189 [2024-07-13 15:45:20.901323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.189 [2024-07-13 15:45:20.901349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.189 [2024-07-13 15:45:20.901362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.189 [2024-07-13 15:45:20.901375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.189 [2024-07-13 15:45:20.901405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-13 15:45:20.911339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.189 [2024-07-13 15:45:20.911490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.189 [2024-07-13 15:45:20.911516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.189 [2024-07-13 15:45:20.911536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.189 [2024-07-13 15:45:20.911550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.189 [2024-07-13 15:45:20.911579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-13 15:45:20.921302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.189 [2024-07-13 15:45:20.921433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.189 [2024-07-13 15:45:20.921459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.189 [2024-07-13 15:45:20.921473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.189 [2024-07-13 15:45:20.921486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.189 [2024-07-13 15:45:20.921531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.189 qpair failed and we were unable to recover it. 
00:33:50.189 [2024-07-13 15:45:20.931349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.189 [2024-07-13 15:45:20.931485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.189 [2024-07-13 15:45:20.931512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.189 [2024-07-13 15:45:20.931529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.189 [2024-07-13 15:45:20.931544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.189 [2024-07-13 15:45:20.931574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-13 15:45:20.941343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.189 [2024-07-13 15:45:20.941472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.189 [2024-07-13 15:45:20.941498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.189 [2024-07-13 15:45:20.941512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.189 [2024-07-13 15:45:20.941525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.189 [2024-07-13 15:45:20.941555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.189 qpair failed and we were unable to recover it. 00:33:50.189 [2024-07-13 15:45:20.951351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.189 [2024-07-13 15:45:20.951500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.189 [2024-07-13 15:45:20.951525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.189 [2024-07-13 15:45:20.951539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.189 [2024-07-13 15:45:20.951552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.189 [2024-07-13 15:45:20.951581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.189 qpair failed and we were unable to recover it. 
00:33:50.447 [2024-07-13 15:45:20.961391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.447 [2024-07-13 15:45:20.961522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.447 [2024-07-13 15:45:20.961548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.447 [2024-07-13 15:45:20.961563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.447 [2024-07-13 15:45:20.961576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7018000b90 00:33:50.447 [2024-07-13 15:45:20.961604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:50.447 qpair failed and we were unable to recover it. 00:33:50.447 [2024-07-13 15:45:20.971406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.447 [2024-07-13 15:45:20.971569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.447 [2024-07-13 15:45:20.971602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.447 [2024-07-13 15:45:20.971628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.447 [2024-07-13 15:45:20.971653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.447 [2024-07-13 15:45:20.971700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.447 qpair failed and we were unable to recover it. 00:33:50.447 [2024-07-13 15:45:20.981458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.447 [2024-07-13 15:45:20.981594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.447 [2024-07-13 15:45:20.981622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.447 [2024-07-13 15:45:20.981645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.447 [2024-07-13 15:45:20.981669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.447 [2024-07-13 15:45:20.981716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.447 qpair failed and we were unable to recover it. 
00:33:50.447 [2024-07-13 15:45:20.991543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.447 [2024-07-13 15:45:20.991731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.447 [2024-07-13 15:45:20.991773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.447 [2024-07-13 15:45:20.991795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.447 [2024-07-13 15:45:20.991818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.447 [2024-07-13 15:45:20.991886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.447 qpair failed and we were unable to recover it. 00:33:50.447 [2024-07-13 15:45:21.001552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.447 [2024-07-13 15:45:21.001693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.447 [2024-07-13 15:45:21.001725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.447 [2024-07-13 15:45:21.001749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.447 [2024-07-13 15:45:21.001788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.447 [2024-07-13 15:45:21.001837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.447 qpair failed and we were unable to recover it. 00:33:50.447 [2024-07-13 15:45:21.011539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.447 [2024-07-13 15:45:21.011711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.447 [2024-07-13 15:45:21.011738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.447 [2024-07-13 15:45:21.011761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.447 [2024-07-13 15:45:21.011784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.447 [2024-07-13 15:45:21.011832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.447 qpair failed and we were unable to recover it. 
00:33:50.447 [2024-07-13 15:45:21.021590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.447 [2024-07-13 15:45:21.021739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.447 [2024-07-13 15:45:21.021770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.447 [2024-07-13 15:45:21.021795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.447 [2024-07-13 15:45:21.021833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.447 [2024-07-13 15:45:21.021921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.447 qpair failed and we were unable to recover it. 00:33:50.447 [2024-07-13 15:45:21.031600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.447 [2024-07-13 15:45:21.031745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.447 [2024-07-13 15:45:21.031774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.447 [2024-07-13 15:45:21.031797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.447 [2024-07-13 15:45:21.031823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.447 [2024-07-13 15:45:21.031878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.447 qpair failed and we were unable to recover it. 00:33:50.447 [2024-07-13 15:45:21.041646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.447 [2024-07-13 15:45:21.041788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.447 [2024-07-13 15:45:21.041815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.447 [2024-07-13 15:45:21.041838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.447 [2024-07-13 15:45:21.041861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.447 [2024-07-13 15:45:21.041934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.447 qpair failed and we were unable to recover it. 
00:33:50.447 [2024-07-13 15:45:21.051645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.447 [2024-07-13 15:45:21.051811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.447 [2024-07-13 15:45:21.051838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.447 [2024-07-13 15:45:21.051861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.447 [2024-07-13 15:45:21.051895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.447 [2024-07-13 15:45:21.051941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.447 qpair failed and we were unable to recover it. 00:33:50.447 [2024-07-13 15:45:21.061653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.447 [2024-07-13 15:45:21.061786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.447 [2024-07-13 15:45:21.061814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.447 [2024-07-13 15:45:21.061837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.447 [2024-07-13 15:45:21.061861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.447 [2024-07-13 15:45:21.061918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.447 qpair failed and we were unable to recover it. 00:33:50.447 [2024-07-13 15:45:21.071691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.447 [2024-07-13 15:45:21.071880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.447 [2024-07-13 15:45:21.071907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.447 [2024-07-13 15:45:21.071930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.447 [2024-07-13 15:45:21.071953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.447 [2024-07-13 15:45:21.072001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.447 qpair failed and we were unable to recover it. 
00:33:50.447 [2024-07-13 15:45:21.081734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.447 [2024-07-13 15:45:21.081878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.447 [2024-07-13 15:45:21.081905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.447 [2024-07-13 15:45:21.081928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.447 [2024-07-13 15:45:21.081951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.447 [2024-07-13 15:45:21.081997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.447 qpair failed and we were unable to recover it. 00:33:50.447 [2024-07-13 15:45:21.091773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.447 [2024-07-13 15:45:21.091916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.447 [2024-07-13 15:45:21.091948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.447 [2024-07-13 15:45:21.091973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.447 [2024-07-13 15:45:21.091997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.447 [2024-07-13 15:45:21.092060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.447 qpair failed and we were unable to recover it. 00:33:50.447 [2024-07-13 15:45:21.101784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.447 [2024-07-13 15:45:21.101939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.447 [2024-07-13 15:45:21.101967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.447 [2024-07-13 15:45:21.101990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.447 [2024-07-13 15:45:21.102013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.447 [2024-07-13 15:45:21.102060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.447 qpair failed and we were unable to recover it. 
00:33:50.447 [2024-07-13 15:45:21.111906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.448 [2024-07-13 15:45:21.112073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.448 [2024-07-13 15:45:21.112100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.448 [2024-07-13 15:45:21.112123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.448 [2024-07-13 15:45:21.112147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.448 [2024-07-13 15:45:21.112208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.448 qpair failed and we were unable to recover it. 00:33:50.448 [2024-07-13 15:45:21.121849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.448 [2024-07-13 15:45:21.122028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.448 [2024-07-13 15:45:21.122056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.448 [2024-07-13 15:45:21.122078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.448 [2024-07-13 15:45:21.122100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.448 [2024-07-13 15:45:21.122145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.448 qpair failed and we were unable to recover it. 00:33:50.448 [2024-07-13 15:45:21.131859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.448 [2024-07-13 15:45:21.131994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.448 [2024-07-13 15:45:21.132021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.448 [2024-07-13 15:45:21.132043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.448 [2024-07-13 15:45:21.132066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.448 [2024-07-13 15:45:21.132119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.448 qpair failed and we were unable to recover it. 
00:33:50.448 [2024-07-13 15:45:21.141879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.448 [2024-07-13 15:45:21.142021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.448 [2024-07-13 15:45:21.142048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.448 [2024-07-13 15:45:21.142071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.448 [2024-07-13 15:45:21.142096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.448 [2024-07-13 15:45:21.142143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.448 qpair failed and we were unable to recover it. 00:33:50.448 [2024-07-13 15:45:21.151931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.448 [2024-07-13 15:45:21.152080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.448 [2024-07-13 15:45:21.152107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.448 [2024-07-13 15:45:21.152129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.448 [2024-07-13 15:45:21.152154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.448 [2024-07-13 15:45:21.152201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.448 qpair failed and we were unable to recover it. 00:33:50.448 [2024-07-13 15:45:21.161968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.448 [2024-07-13 15:45:21.162113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.448 [2024-07-13 15:45:21.162140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.448 [2024-07-13 15:45:21.162163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.448 [2024-07-13 15:45:21.162188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.448 [2024-07-13 15:45:21.162247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.448 qpair failed and we were unable to recover it. 
00:33:50.448 [2024-07-13 15:45:21.171979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.448 [2024-07-13 15:45:21.172118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.448 [2024-07-13 15:45:21.172145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.448 [2024-07-13 15:45:21.172168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.448 [2024-07-13 15:45:21.172191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.448 [2024-07-13 15:45:21.172237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.448 qpair failed and we were unable to recover it. 00:33:50.448 [2024-07-13 15:45:21.182020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.448 [2024-07-13 15:45:21.182155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.448 [2024-07-13 15:45:21.182187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.448 [2024-07-13 15:45:21.182211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.448 [2024-07-13 15:45:21.182235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.448 [2024-07-13 15:45:21.182281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.448 qpair failed and we were unable to recover it. 00:33:50.448 [2024-07-13 15:45:21.192162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.448 [2024-07-13 15:45:21.192323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.448 [2024-07-13 15:45:21.192349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.448 [2024-07-13 15:45:21.192371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.448 [2024-07-13 15:45:21.192396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.448 [2024-07-13 15:45:21.192457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.448 qpair failed and we were unable to recover it. 
00:33:50.448 [2024-07-13 15:45:21.202079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.448 [2024-07-13 15:45:21.202210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.448 [2024-07-13 15:45:21.202237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.448 [2024-07-13 15:45:21.202260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.448 [2024-07-13 15:45:21.202283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.448 [2024-07-13 15:45:21.202329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.448 qpair failed and we were unable to recover it. 00:33:50.448 [2024-07-13 15:45:21.212125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.448 [2024-07-13 15:45:21.212257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.448 [2024-07-13 15:45:21.212284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.448 [2024-07-13 15:45:21.212307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.448 [2024-07-13 15:45:21.212331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.705 [2024-07-13 15:45:21.212377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.705 qpair failed and we were unable to recover it. 00:33:50.705 [2024-07-13 15:45:21.222146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.705 [2024-07-13 15:45:21.222330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.705 [2024-07-13 15:45:21.222371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.705 [2024-07-13 15:45:21.222394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.705 [2024-07-13 15:45:21.222423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.705 [2024-07-13 15:45:21.222479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.705 qpair failed and we were unable to recover it. 
00:33:50.705 [2024-07-13 15:45:21.232165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.705 [2024-07-13 15:45:21.232302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.705 [2024-07-13 15:45:21.232329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.705 [2024-07-13 15:45:21.232352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.705 [2024-07-13 15:45:21.232376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.705 [2024-07-13 15:45:21.232422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.705 qpair failed and we were unable to recover it. 00:33:50.705 [2024-07-13 15:45:21.242221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.705 [2024-07-13 15:45:21.242359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.705 [2024-07-13 15:45:21.242387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.705 [2024-07-13 15:45:21.242410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.705 [2024-07-13 15:45:21.242433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.705 [2024-07-13 15:45:21.242493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.705 qpair failed and we were unable to recover it. 00:33:50.705 [2024-07-13 15:45:21.252299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.705 [2024-07-13 15:45:21.252434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.705 [2024-07-13 15:45:21.252461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.705 [2024-07-13 15:45:21.252483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.705 [2024-07-13 15:45:21.252507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.705 [2024-07-13 15:45:21.252553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.705 qpair failed and we were unable to recover it. 
00:33:50.705 [2024-07-13 15:45:21.262319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.705 [2024-07-13 15:45:21.262455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.705 [2024-07-13 15:45:21.262481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.705 [2024-07-13 15:45:21.262504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.705 [2024-07-13 15:45:21.262527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.705 [2024-07-13 15:45:21.262586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.705 qpair failed and we were unable to recover it. 00:33:50.705 [2024-07-13 15:45:21.272287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.705 [2024-07-13 15:45:21.272437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.705 [2024-07-13 15:45:21.272470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.705 [2024-07-13 15:45:21.272494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.705 [2024-07-13 15:45:21.272515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.705 [2024-07-13 15:45:21.272560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.705 qpair failed and we were unable to recover it. 00:33:50.705 [2024-07-13 15:45:21.282301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.705 [2024-07-13 15:45:21.282433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.705 [2024-07-13 15:45:21.282460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.705 [2024-07-13 15:45:21.282483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.705 [2024-07-13 15:45:21.282507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.705 [2024-07-13 15:45:21.282553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.705 qpair failed and we were unable to recover it. 
00:33:50.705 [2024-07-13 15:45:21.292350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.705 [2024-07-13 15:45:21.292496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.705 [2024-07-13 15:45:21.292523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.705 [2024-07-13 15:45:21.292546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.705 [2024-07-13 15:45:21.292569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.705 [2024-07-13 15:45:21.292630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.705 qpair failed and we were unable to recover it. 00:33:50.705 [2024-07-13 15:45:21.302401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.705 [2024-07-13 15:45:21.302580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.705 [2024-07-13 15:45:21.302607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.705 [2024-07-13 15:45:21.302630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.705 [2024-07-13 15:45:21.302655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.705 [2024-07-13 15:45:21.302702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.705 qpair failed and we were unable to recover it. 00:33:50.705 [2024-07-13 15:45:21.312383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.705 [2024-07-13 15:45:21.312557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.705 [2024-07-13 15:45:21.312584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.705 [2024-07-13 15:45:21.312615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.705 [2024-07-13 15:45:21.312639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.705 [2024-07-13 15:45:21.312685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.705 qpair failed and we were unable to recover it. 
00:33:50.705 [2024-07-13 15:45:21.322480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.705 [2024-07-13 15:45:21.322656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.705 [2024-07-13 15:45:21.322683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.705 [2024-07-13 15:45:21.322720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.705 [2024-07-13 15:45:21.322742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.705 [2024-07-13 15:45:21.322802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.705 qpair failed and we were unable to recover it. 00:33:50.705 [2024-07-13 15:45:21.332466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.705 [2024-07-13 15:45:21.332635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.705 [2024-07-13 15:45:21.332662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.705 [2024-07-13 15:45:21.332685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.705 [2024-07-13 15:45:21.332710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.705 [2024-07-13 15:45:21.332755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.705 qpair failed and we were unable to recover it. 00:33:50.705 [2024-07-13 15:45:21.342486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.705 [2024-07-13 15:45:21.342660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.705 [2024-07-13 15:45:21.342687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.705 [2024-07-13 15:45:21.342709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.705 [2024-07-13 15:45:21.342732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.705 [2024-07-13 15:45:21.342777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.705 qpair failed and we were unable to recover it. 
00:33:50.705 [2024-07-13 15:45:21.352509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.705 [2024-07-13 15:45:21.352658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.705 [2024-07-13 15:45:21.352684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.705 [2024-07-13 15:45:21.352707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.705 [2024-07-13 15:45:21.352730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.705 [2024-07-13 15:45:21.352792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.705 qpair failed and we were unable to recover it. 00:33:50.705 [2024-07-13 15:45:21.362518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.705 [2024-07-13 15:45:21.362658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.705 [2024-07-13 15:45:21.362685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.705 [2024-07-13 15:45:21.362707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.705 [2024-07-13 15:45:21.362732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.705 [2024-07-13 15:45:21.362779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.705 qpair failed and we were unable to recover it. 00:33:50.705 [2024-07-13 15:45:21.372593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.705 [2024-07-13 15:45:21.372768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.705 [2024-07-13 15:45:21.372795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.705 [2024-07-13 15:45:21.372832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.705 [2024-07-13 15:45:21.372855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.705 [2024-07-13 15:45:21.372921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.705 qpair failed and we were unable to recover it. 
00:33:50.705 [2024-07-13 15:45:21.382572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.705 [2024-07-13 15:45:21.382708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.705 [2024-07-13 15:45:21.382734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.705 [2024-07-13 15:45:21.382757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.705 [2024-07-13 15:45:21.382782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.705 [2024-07-13 15:45:21.382829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.705 qpair failed and we were unable to recover it. 00:33:50.705 [2024-07-13 15:45:21.392656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.705 [2024-07-13 15:45:21.392822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.705 [2024-07-13 15:45:21.392848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.705 [2024-07-13 15:45:21.392881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.705 [2024-07-13 15:45:21.392908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.706 [2024-07-13 15:45:21.392953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.706 qpair failed and we were unable to recover it. 00:33:50.706 [2024-07-13 15:45:21.402645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.706 [2024-07-13 15:45:21.402777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.706 [2024-07-13 15:45:21.402804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.706 [2024-07-13 15:45:21.402833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.706 [2024-07-13 15:45:21.402857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.706 [2024-07-13 15:45:21.402912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.706 qpair failed and we were unable to recover it. 
00:33:50.706 [2024-07-13 15:45:21.412705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.706 [2024-07-13 15:45:21.412890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.706 [2024-07-13 15:45:21.412917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.706 [2024-07-13 15:45:21.412940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.706 [2024-07-13 15:45:21.412966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.706 [2024-07-13 15:45:21.413012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.706 qpair failed and we were unable to recover it. 00:33:50.706 [2024-07-13 15:45:21.422804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.706 [2024-07-13 15:45:21.422953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.706 [2024-07-13 15:45:21.422980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.706 [2024-07-13 15:45:21.423003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.706 [2024-07-13 15:45:21.423027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.706 [2024-07-13 15:45:21.423073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.706 qpair failed and we were unable to recover it. 00:33:50.706 [2024-07-13 15:45:21.432748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.706 [2024-07-13 15:45:21.432899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.706 [2024-07-13 15:45:21.432925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.706 [2024-07-13 15:45:21.432948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.706 [2024-07-13 15:45:21.432972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.706 [2024-07-13 15:45:21.433018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.706 qpair failed and we were unable to recover it. 
00:33:50.706 [2024-07-13 15:45:21.442792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.706 [2024-07-13 15:45:21.442982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.706 [2024-07-13 15:45:21.443009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.706 [2024-07-13 15:45:21.443032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.706 [2024-07-13 15:45:21.443055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.706 [2024-07-13 15:45:21.443103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.706 qpair failed and we were unable to recover it. 00:33:50.706 [2024-07-13 15:45:21.452770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.706 [2024-07-13 15:45:21.452912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.706 [2024-07-13 15:45:21.452939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.706 [2024-07-13 15:45:21.452961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.706 [2024-07-13 15:45:21.452987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.706 [2024-07-13 15:45:21.453033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.706 qpair failed and we were unable to recover it. 00:33:50.706 [2024-07-13 15:45:21.462903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.706 [2024-07-13 15:45:21.463031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.706 [2024-07-13 15:45:21.463058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.706 [2024-07-13 15:45:21.463080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.706 [2024-07-13 15:45:21.463104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.706 [2024-07-13 15:45:21.463150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.706 qpair failed and we were unable to recover it. 
00:33:50.963 [2024-07-13 15:45:21.472896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.963 [2024-07-13 15:45:21.473041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.963 [2024-07-13 15:45:21.473067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.963 [2024-07-13 15:45:21.473089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.963 [2024-07-13 15:45:21.473113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.963 [2024-07-13 15:45:21.473160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.963 qpair failed and we were unable to recover it. 00:33:50.963 [2024-07-13 15:45:21.482889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.963 [2024-07-13 15:45:21.483026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.963 [2024-07-13 15:45:21.483052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.963 [2024-07-13 15:45:21.483075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.963 [2024-07-13 15:45:21.483097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.963 [2024-07-13 15:45:21.483143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.963 qpair failed and we were unable to recover it. 00:33:50.963 [2024-07-13 15:45:21.492904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.963 [2024-07-13 15:45:21.493048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.963 [2024-07-13 15:45:21.493083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.963 [2024-07-13 15:45:21.493107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.963 [2024-07-13 15:45:21.493130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.963 [2024-07-13 15:45:21.493175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.963 qpair failed and we were unable to recover it. 
00:33:50.964 [2024-07-13 15:45:21.502928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.964 [2024-07-13 15:45:21.503091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.964 [2024-07-13 15:45:21.503118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.964 [2024-07-13 15:45:21.503140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.964 [2024-07-13 15:45:21.503163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.964 [2024-07-13 15:45:21.503209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.964 qpair failed and we were unable to recover it. 00:33:50.964 [2024-07-13 15:45:21.512963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.964 [2024-07-13 15:45:21.513100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.964 [2024-07-13 15:45:21.513126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.964 [2024-07-13 15:45:21.513148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.964 [2024-07-13 15:45:21.513171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.964 [2024-07-13 15:45:21.513217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.964 qpair failed and we were unable to recover it. 00:33:50.964 [2024-07-13 15:45:21.523007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.964 [2024-07-13 15:45:21.523168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.964 [2024-07-13 15:45:21.523195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.964 [2024-07-13 15:45:21.523218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.964 [2024-07-13 15:45:21.523242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.964 [2024-07-13 15:45:21.523288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.964 qpair failed and we were unable to recover it. 
00:33:50.964 [2024-07-13 15:45:21.533020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.964 [2024-07-13 15:45:21.533159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.964 [2024-07-13 15:45:21.533186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.964 [2024-07-13 15:45:21.533208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.964 [2024-07-13 15:45:21.533233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.964 [2024-07-13 15:45:21.533286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.964 qpair failed and we were unable to recover it. 00:33:50.964 [2024-07-13 15:45:21.543085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.964 [2024-07-13 15:45:21.543238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.964 [2024-07-13 15:45:21.543264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.964 [2024-07-13 15:45:21.543287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.964 [2024-07-13 15:45:21.543325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.964 [2024-07-13 15:45:21.543383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.964 qpair failed and we were unable to recover it. 00:33:50.964 [2024-07-13 15:45:21.553087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.964 [2024-07-13 15:45:21.553231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.964 [2024-07-13 15:45:21.553257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.964 [2024-07-13 15:45:21.553280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.964 [2024-07-13 15:45:21.553304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.964 [2024-07-13 15:45:21.553350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.964 qpair failed and we were unable to recover it. 
00:33:50.964 [2024-07-13 15:45:21.563155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.964 [2024-07-13 15:45:21.563301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.964 [2024-07-13 15:45:21.563329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.964 [2024-07-13 15:45:21.563352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.964 [2024-07-13 15:45:21.563389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.964 [2024-07-13 15:45:21.563447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.964 qpair failed and we were unable to recover it. 00:33:50.964 [2024-07-13 15:45:21.573219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.964 [2024-07-13 15:45:21.573394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.964 [2024-07-13 15:45:21.573421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.964 [2024-07-13 15:45:21.573444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.964 [2024-07-13 15:45:21.573469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.964 [2024-07-13 15:45:21.573516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.964 qpair failed and we were unable to recover it. 00:33:50.964 [2024-07-13 15:45:21.583183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.964 [2024-07-13 15:45:21.583331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.964 [2024-07-13 15:45:21.583363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.964 [2024-07-13 15:45:21.583388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.964 [2024-07-13 15:45:21.583427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.964 [2024-07-13 15:45:21.583486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.964 qpair failed and we were unable to recover it. 
00:33:50.964 [2024-07-13 15:45:21.593218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.964 [2024-07-13 15:45:21.593361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.964 [2024-07-13 15:45:21.593388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.964 [2024-07-13 15:45:21.593411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.964 [2024-07-13 15:45:21.593434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.964 [2024-07-13 15:45:21.593480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.964 qpair failed and we were unable to recover it. 00:33:50.964 [2024-07-13 15:45:21.603210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.964 [2024-07-13 15:45:21.603354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.964 [2024-07-13 15:45:21.603381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.964 [2024-07-13 15:45:21.603403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.964 [2024-07-13 15:45:21.603427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.964 [2024-07-13 15:45:21.603473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.964 qpair failed and we were unable to recover it. 00:33:50.964 [2024-07-13 15:45:21.613256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.964 [2024-07-13 15:45:21.613398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.964 [2024-07-13 15:45:21.613424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.964 [2024-07-13 15:45:21.613446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.964 [2024-07-13 15:45:21.613472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.964 [2024-07-13 15:45:21.613519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.964 qpair failed and we were unable to recover it. 
00:33:50.964 [2024-07-13 15:45:21.623280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.964 [2024-07-13 15:45:21.623464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.964 [2024-07-13 15:45:21.623491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.964 [2024-07-13 15:45:21.623514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.964 [2024-07-13 15:45:21.623544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.964 [2024-07-13 15:45:21.623591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.964 qpair failed and we were unable to recover it. 00:33:50.964 [2024-07-13 15:45:21.633314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.964 [2024-07-13 15:45:21.633465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.964 [2024-07-13 15:45:21.633492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.964 [2024-07-13 15:45:21.633516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.964 [2024-07-13 15:45:21.633554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.964 [2024-07-13 15:45:21.633612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.964 qpair failed and we were unable to recover it. 00:33:50.964 [2024-07-13 15:45:21.643322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.964 [2024-07-13 15:45:21.643474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.964 [2024-07-13 15:45:21.643501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.964 [2024-07-13 15:45:21.643524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.965 [2024-07-13 15:45:21.643550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.965 [2024-07-13 15:45:21.643613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.965 qpair failed and we were unable to recover it. 
00:33:50.965 [2024-07-13 15:45:21.653364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.965 [2024-07-13 15:45:21.653508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.965 [2024-07-13 15:45:21.653534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.965 [2024-07-13 15:45:21.653557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.965 [2024-07-13 15:45:21.653590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.965 [2024-07-13 15:45:21.653641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.965 qpair failed and we were unable to recover it. 00:33:50.965 [2024-07-13 15:45:21.663420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.965 [2024-07-13 15:45:21.663576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.965 [2024-07-13 15:45:21.663602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.965 [2024-07-13 15:45:21.663625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.965 [2024-07-13 15:45:21.663663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.965 [2024-07-13 15:45:21.663706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.965 qpair failed and we were unable to recover it. 00:33:50.965 [2024-07-13 15:45:21.673482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.965 [2024-07-13 15:45:21.673623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.965 [2024-07-13 15:45:21.673653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.965 [2024-07-13 15:45:21.673676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.965 [2024-07-13 15:45:21.673699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.965 [2024-07-13 15:45:21.673745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.965 qpair failed and we were unable to recover it. 
00:33:50.965 [2024-07-13 15:45:21.683420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.965 [2024-07-13 15:45:21.683555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.965 [2024-07-13 15:45:21.683582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.965 [2024-07-13 15:45:21.683604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.965 [2024-07-13 15:45:21.683627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.965 [2024-07-13 15:45:21.683672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.965 qpair failed and we were unable to recover it. 00:33:50.965 [2024-07-13 15:45:21.693585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.965 [2024-07-13 15:45:21.693726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.965 [2024-07-13 15:45:21.693770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.965 [2024-07-13 15:45:21.693794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.965 [2024-07-13 15:45:21.693817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.965 [2024-07-13 15:45:21.693863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.965 qpair failed and we were unable to recover it. 00:33:50.965 [2024-07-13 15:45:21.703503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.965 [2024-07-13 15:45:21.703636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.965 [2024-07-13 15:45:21.703662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.965 [2024-07-13 15:45:21.703685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.965 [2024-07-13 15:45:21.703708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.965 [2024-07-13 15:45:21.703753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.965 qpair failed and we were unable to recover it. 
00:33:50.965 [2024-07-13 15:45:21.713523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.965 [2024-07-13 15:45:21.713656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.965 [2024-07-13 15:45:21.713682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.965 [2024-07-13 15:45:21.713705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.965 [2024-07-13 15:45:21.713736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.965 [2024-07-13 15:45:21.713784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.965 qpair failed and we were unable to recover it. 00:33:50.965 [2024-07-13 15:45:21.723531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:50.965 [2024-07-13 15:45:21.723712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:50.965 [2024-07-13 15:45:21.723739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:50.965 [2024-07-13 15:45:21.723761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:50.965 [2024-07-13 15:45:21.723785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:50.965 [2024-07-13 15:45:21.723831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:50.965 qpair failed and we were unable to recover it. 00:33:51.224 [2024-07-13 15:45:21.733649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.224 [2024-07-13 15:45:21.733823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.224 [2024-07-13 15:45:21.733872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.224 [2024-07-13 15:45:21.733900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.224 [2024-07-13 15:45:21.733924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.224 [2024-07-13 15:45:21.733963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.224 qpair failed and we were unable to recover it. 
00:33:51.224 [2024-07-13 15:45:21.743592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.224 [2024-07-13 15:45:21.743722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.224 [2024-07-13 15:45:21.743748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.224 [2024-07-13 15:45:21.743762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.224 [2024-07-13 15:45:21.743775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.224 [2024-07-13 15:45:21.743806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.224 qpair failed and we were unable to recover it. 00:33:51.224 [2024-07-13 15:45:21.753686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.224 [2024-07-13 15:45:21.753888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.224 [2024-07-13 15:45:21.753917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.224 [2024-07-13 15:45:21.753932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.224 [2024-07-13 15:45:21.753945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.224 [2024-07-13 15:45:21.753976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.224 qpair failed and we were unable to recover it. 00:33:51.224 [2024-07-13 15:45:21.763656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.224 [2024-07-13 15:45:21.763832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.224 [2024-07-13 15:45:21.763859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.224 [2024-07-13 15:45:21.763883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.224 [2024-07-13 15:45:21.763896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.224 [2024-07-13 15:45:21.763933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.224 qpair failed and we were unable to recover it. 
00:33:51.224 [2024-07-13 15:45:21.773696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.224 [2024-07-13 15:45:21.773842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.224 [2024-07-13 15:45:21.773876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.224 [2024-07-13 15:45:21.773901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.224 [2024-07-13 15:45:21.773925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.224 [2024-07-13 15:45:21.773970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.224 qpair failed and we were unable to recover it. 00:33:51.224 [2024-07-13 15:45:21.783702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.224 [2024-07-13 15:45:21.783844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.224 [2024-07-13 15:45:21.783879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.224 [2024-07-13 15:45:21.783904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.224 [2024-07-13 15:45:21.783928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.224 [2024-07-13 15:45:21.783976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.224 qpair failed and we were unable to recover it. 00:33:51.224 [2024-07-13 15:45:21.793781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.224 [2024-07-13 15:45:21.793943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.224 [2024-07-13 15:45:21.793970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.224 [2024-07-13 15:45:21.793993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.224 [2024-07-13 15:45:21.794017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.224 [2024-07-13 15:45:21.794063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.224 qpair failed and we were unable to recover it. 
00:33:51.224 [2024-07-13 15:45:21.803738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.224 [2024-07-13 15:45:21.803921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.224 [2024-07-13 15:45:21.803947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.224 [2024-07-13 15:45:21.803977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.224 [2024-07-13 15:45:21.804001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.224 [2024-07-13 15:45:21.804051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.224 qpair failed and we were unable to recover it. 00:33:51.225 [2024-07-13 15:45:21.813774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.225 [2024-07-13 15:45:21.813921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.225 [2024-07-13 15:45:21.813948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.225 [2024-07-13 15:45:21.813970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.225 [2024-07-13 15:45:21.813996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.225 [2024-07-13 15:45:21.814044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.225 qpair failed and we were unable to recover it. 00:33:51.225 [2024-07-13 15:45:21.823805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.225 [2024-07-13 15:45:21.823954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.225 [2024-07-13 15:45:21.823981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.225 [2024-07-13 15:45:21.824004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.225 [2024-07-13 15:45:21.824028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.225 [2024-07-13 15:45:21.824075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.225 qpair failed and we were unable to recover it. 
00:33:51.225 [2024-07-13 15:45:21.833903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.225 [2024-07-13 15:45:21.834068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.225 [2024-07-13 15:45:21.834095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.225 [2024-07-13 15:45:21.834118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.225 [2024-07-13 15:45:21.834141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.225 [2024-07-13 15:45:21.834188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.225 qpair failed and we were unable to recover it. 00:33:51.225 [2024-07-13 15:45:21.843879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.225 [2024-07-13 15:45:21.844033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.225 [2024-07-13 15:45:21.844060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.225 [2024-07-13 15:45:21.844082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.225 [2024-07-13 15:45:21.844106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.225 [2024-07-13 15:45:21.844153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.225 qpair failed and we were unable to recover it. 00:33:51.225 [2024-07-13 15:45:21.853911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.225 [2024-07-13 15:45:21.854048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.225 [2024-07-13 15:45:21.854075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.225 [2024-07-13 15:45:21.854098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.225 [2024-07-13 15:45:21.854121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.225 [2024-07-13 15:45:21.854169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.225 qpair failed and we were unable to recover it. 
00:33:51.225 [2024-07-13 15:45:21.863919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.225 [2024-07-13 15:45:21.864062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.225 [2024-07-13 15:45:21.864089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.225 [2024-07-13 15:45:21.864112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.225 [2024-07-13 15:45:21.864150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.225 [2024-07-13 15:45:21.864195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.225 qpair failed and we were unable to recover it. 00:33:51.225 [2024-07-13 15:45:21.874005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.225 [2024-07-13 15:45:21.874148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.225 [2024-07-13 15:45:21.874175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.225 [2024-07-13 15:45:21.874197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.225 [2024-07-13 15:45:21.874221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.225 [2024-07-13 15:45:21.874266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.225 qpair failed and we were unable to recover it. 00:33:51.225 [2024-07-13 15:45:21.884000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.225 [2024-07-13 15:45:21.884146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.225 [2024-07-13 15:45:21.884173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.225 [2024-07-13 15:45:21.884197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.225 [2024-07-13 15:45:21.884236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.225 [2024-07-13 15:45:21.884304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.225 qpair failed and we were unable to recover it. 
00:33:51.225 [2024-07-13 15:45:21.894036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.225 [2024-07-13 15:45:21.894179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.225 [2024-07-13 15:45:21.894210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.225 [2024-07-13 15:45:21.894234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.225 [2024-07-13 15:45:21.894271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.225 [2024-07-13 15:45:21.894328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.225 qpair failed and we were unable to recover it. 00:33:51.225 [2024-07-13 15:45:21.904024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.225 [2024-07-13 15:45:21.904176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.225 [2024-07-13 15:45:21.904203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.225 [2024-07-13 15:45:21.904226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.225 [2024-07-13 15:45:21.904251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.225 [2024-07-13 15:45:21.904297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.225 qpair failed and we were unable to recover it. 00:33:51.225 [2024-07-13 15:45:21.914121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.225 [2024-07-13 15:45:21.914302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.225 [2024-07-13 15:45:21.914328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.225 [2024-07-13 15:45:21.914351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.225 [2024-07-13 15:45:21.914374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.225 [2024-07-13 15:45:21.914419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.225 qpair failed and we were unable to recover it. 
00:33:51.225 [2024-07-13 15:45:21.924170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.225 [2024-07-13 15:45:21.924311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.226 [2024-07-13 15:45:21.924338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.226 [2024-07-13 15:45:21.924361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.226 [2024-07-13 15:45:21.924384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.226 [2024-07-13 15:45:21.924444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.226 qpair failed and we were unable to recover it. 00:33:51.226 [2024-07-13 15:45:21.934117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.226 [2024-07-13 15:45:21.934252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.226 [2024-07-13 15:45:21.934279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.226 [2024-07-13 15:45:21.934302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.226 [2024-07-13 15:45:21.934327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.226 [2024-07-13 15:45:21.934379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.226 qpair failed and we were unable to recover it. 00:33:51.226 [2024-07-13 15:45:21.944173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.226 [2024-07-13 15:45:21.944357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.226 [2024-07-13 15:45:21.944384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.226 [2024-07-13 15:45:21.944407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.226 [2024-07-13 15:45:21.944430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.226 [2024-07-13 15:45:21.944476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.226 qpair failed and we were unable to recover it. 
00:33:51.226 [2024-07-13 15:45:21.954273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.226 [2024-07-13 15:45:21.954420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.226 [2024-07-13 15:45:21.954446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.226 [2024-07-13 15:45:21.954469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.226 [2024-07-13 15:45:21.954495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.226 [2024-07-13 15:45:21.954541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.226 qpair failed and we were unable to recover it. 00:33:51.226 [2024-07-13 15:45:21.964212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.226 [2024-07-13 15:45:21.964358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.226 [2024-07-13 15:45:21.964384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.226 [2024-07-13 15:45:21.964408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.226 [2024-07-13 15:45:21.964432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.226 [2024-07-13 15:45:21.964477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.226 qpair failed and we were unable to recover it. 00:33:51.226 [2024-07-13 15:45:21.974257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.226 [2024-07-13 15:45:21.974395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.226 [2024-07-13 15:45:21.974422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.226 [2024-07-13 15:45:21.974444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.226 [2024-07-13 15:45:21.974469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.226 [2024-07-13 15:45:21.974528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.226 qpair failed and we were unable to recover it. 
00:33:51.226 [2024-07-13 15:45:21.984288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.226 [2024-07-13 15:45:21.984421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.226 [2024-07-13 15:45:21.984453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.226 [2024-07-13 15:45:21.984477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.226 [2024-07-13 15:45:21.984501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.226 [2024-07-13 15:45:21.984547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.226 qpair failed and we were unable to recover it. 00:33:51.486 [2024-07-13 15:45:21.994313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.486 [2024-07-13 15:45:21.994491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.486 [2024-07-13 15:45:21.994518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.486 [2024-07-13 15:45:21.994542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.486 [2024-07-13 15:45:21.994565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.486 [2024-07-13 15:45:21.994636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.486 qpair failed and we were unable to recover it. 00:33:51.486 [2024-07-13 15:45:22.004310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.486 [2024-07-13 15:45:22.004454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.486 [2024-07-13 15:45:22.004481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.486 [2024-07-13 15:45:22.004504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.486 [2024-07-13 15:45:22.004529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.486 [2024-07-13 15:45:22.004576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.486 qpair failed and we were unable to recover it. 
00:33:51.486 [2024-07-13 15:45:22.014402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.486 [2024-07-13 15:45:22.014542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.486 [2024-07-13 15:45:22.014571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.486 [2024-07-13 15:45:22.014596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.486 [2024-07-13 15:45:22.014635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.486 [2024-07-13 15:45:22.014695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.486 qpair failed and we were unable to recover it. 00:33:51.486 [2024-07-13 15:45:22.024396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.486 [2024-07-13 15:45:22.024543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.486 [2024-07-13 15:45:22.024570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.486 [2024-07-13 15:45:22.024593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.486 [2024-07-13 15:45:22.024638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.486 [2024-07-13 15:45:22.024700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.486 qpair failed and we were unable to recover it. 00:33:51.486 [2024-07-13 15:45:22.034408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.486 [2024-07-13 15:45:22.034542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.486 [2024-07-13 15:45:22.034569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.486 [2024-07-13 15:45:22.034592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.486 [2024-07-13 15:45:22.034615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.486 [2024-07-13 15:45:22.034660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.486 qpair failed and we were unable to recover it. 
00:33:51.486 [2024-07-13 15:45:22.044423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.486 [2024-07-13 15:45:22.044557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.486 [2024-07-13 15:45:22.044583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.486 [2024-07-13 15:45:22.044607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.486 [2024-07-13 15:45:22.044630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.486 [2024-07-13 15:45:22.044676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.486 qpair failed and we were unable to recover it. 00:33:51.486 [2024-07-13 15:45:22.054484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.486 [2024-07-13 15:45:22.054628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.486 [2024-07-13 15:45:22.054654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.486 [2024-07-13 15:45:22.054681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.486 [2024-07-13 15:45:22.054719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.486 [2024-07-13 15:45:22.054779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.486 qpair failed and we were unable to recover it. 00:33:51.486 [2024-07-13 15:45:22.064507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.486 [2024-07-13 15:45:22.064641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.486 [2024-07-13 15:45:22.064668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.486 [2024-07-13 15:45:22.064690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.486 [2024-07-13 15:45:22.064715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.486 [2024-07-13 15:45:22.064761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.486 qpair failed and we were unable to recover it. 
00:33:51.486 [2024-07-13 15:45:22.074541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.486 [2024-07-13 15:45:22.074691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.486 [2024-07-13 15:45:22.074718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.486 [2024-07-13 15:45:22.074741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.487 [2024-07-13 15:45:22.074766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.487 [2024-07-13 15:45:22.074813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.487 qpair failed and we were unable to recover it. 00:33:51.487 [2024-07-13 15:45:22.084602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.487 [2024-07-13 15:45:22.084803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.487 [2024-07-13 15:45:22.084844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.487 [2024-07-13 15:45:22.084874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.487 [2024-07-13 15:45:22.084913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.487 [2024-07-13 15:45:22.084958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.487 qpair failed and we were unable to recover it. 00:33:51.487 [2024-07-13 15:45:22.094594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.487 [2024-07-13 15:45:22.094740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.487 [2024-07-13 15:45:22.094770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.487 [2024-07-13 15:45:22.094794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.487 [2024-07-13 15:45:22.094833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.487 [2024-07-13 15:45:22.094901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.487 qpair failed and we were unable to recover it. 
00:33:51.487 [2024-07-13 15:45:22.104643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.487 [2024-07-13 15:45:22.104783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.487 [2024-07-13 15:45:22.104811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.487 [2024-07-13 15:45:22.104834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.487 [2024-07-13 15:45:22.104857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.487 [2024-07-13 15:45:22.104916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.487 qpair failed and we were unable to recover it. 00:33:51.487 [2024-07-13 15:45:22.114731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.487 [2024-07-13 15:45:22.114927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.487 [2024-07-13 15:45:22.114957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.487 [2024-07-13 15:45:22.114981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.487 [2024-07-13 15:45:22.115011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.487 [2024-07-13 15:45:22.115059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.487 qpair failed and we were unable to recover it. 00:33:51.487 [2024-07-13 15:45:22.124697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.487 [2024-07-13 15:45:22.124864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.487 [2024-07-13 15:45:22.124899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.487 [2024-07-13 15:45:22.124921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.487 [2024-07-13 15:45:22.124946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.487 [2024-07-13 15:45:22.124994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.487 qpair failed and we were unable to recover it. 
00:33:51.487 [2024-07-13 15:45:22.134777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.487 [2024-07-13 15:45:22.134928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.487 [2024-07-13 15:45:22.134955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.487 [2024-07-13 15:45:22.134978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.487 [2024-07-13 15:45:22.135001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.487 [2024-07-13 15:45:22.135048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.487 qpair failed and we were unable to recover it. 00:33:51.487 [2024-07-13 15:45:22.144752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.487 [2024-07-13 15:45:22.144897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.487 [2024-07-13 15:45:22.144925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.487 [2024-07-13 15:45:22.144948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.487 [2024-07-13 15:45:22.144972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.487 [2024-07-13 15:45:22.145019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.487 qpair failed and we were unable to recover it. 00:33:51.487 [2024-07-13 15:45:22.154829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.487 [2024-07-13 15:45:22.154982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.487 [2024-07-13 15:45:22.155009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.487 [2024-07-13 15:45:22.155032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.487 [2024-07-13 15:45:22.155056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7024000b90 00:33:51.487 [2024-07-13 15:45:22.155102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:51.487 qpair failed and we were unable to recover it. 
00:33:51.487 [2024-07-13 15:45:22.164808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.487 [2024-07-13 15:45:22.164948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.487 [2024-07-13 15:45:22.164981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.487 [2024-07-13 15:45:22.164997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.487 [2024-07-13 15:45:22.165011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.487 [2024-07-13 15:45:22.165042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.487 qpair failed and we were unable to recover it. 00:33:51.487 [2024-07-13 15:45:22.174942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.487 [2024-07-13 15:45:22.175085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.487 [2024-07-13 15:45:22.175112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.487 [2024-07-13 15:45:22.175127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.487 [2024-07-13 15:45:22.175140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.487 [2024-07-13 15:45:22.175172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.487 qpair failed and we were unable to recover it. 00:33:51.487 [2024-07-13 15:45:22.184887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.487 [2024-07-13 15:45:22.185019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.487 [2024-07-13 15:45:22.185046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.487 [2024-07-13 15:45:22.185061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.487 [2024-07-13 15:45:22.185074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.487 [2024-07-13 15:45:22.185105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.487 qpair failed and we were unable to recover it. 
00:33:51.487 [2024-07-13 15:45:22.194931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.487 [2024-07-13 15:45:22.195079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.487 [2024-07-13 15:45:22.195105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.487 [2024-07-13 15:45:22.195120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.487 [2024-07-13 15:45:22.195134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.487 [2024-07-13 15:45:22.195176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.487 qpair failed and we were unable to recover it. 00:33:51.487 [2024-07-13 15:45:22.204961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.487 [2024-07-13 15:45:22.205095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.487 [2024-07-13 15:45:22.205121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.487 [2024-07-13 15:45:22.205142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.487 [2024-07-13 15:45:22.205157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.487 [2024-07-13 15:45:22.205187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.487 qpair failed and we were unable to recover it. 00:33:51.487 [2024-07-13 15:45:22.215001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.487 [2024-07-13 15:45:22.215152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.487 [2024-07-13 15:45:22.215180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.487 [2024-07-13 15:45:22.215194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.488 [2024-07-13 15:45:22.215207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.488 [2024-07-13 15:45:22.215250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.488 qpair failed and we were unable to recover it. 
00:33:51.488 [2024-07-13 15:45:22.225032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.488 [2024-07-13 15:45:22.225168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.488 [2024-07-13 15:45:22.225194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.488 [2024-07-13 15:45:22.225209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.488 [2024-07-13 15:45:22.225221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.488 [2024-07-13 15:45:22.225251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.488 qpair failed and we were unable to recover it. 00:33:51.488 [2024-07-13 15:45:22.235038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.488 [2024-07-13 15:45:22.235171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.488 [2024-07-13 15:45:22.235197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.488 [2024-07-13 15:45:22.235211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.488 [2024-07-13 15:45:22.235224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.488 [2024-07-13 15:45:22.235255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.488 qpair failed and we were unable to recover it. 00:33:51.488 [2024-07-13 15:45:22.245035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.488 [2024-07-13 15:45:22.245165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.488 [2024-07-13 15:45:22.245191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.488 [2024-07-13 15:45:22.245205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.488 [2024-07-13 15:45:22.245218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.488 [2024-07-13 15:45:22.245248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.488 qpair failed and we were unable to recover it. 
00:33:51.749 [2024-07-13 15:45:22.255090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.749 [2024-07-13 15:45:22.255227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.749 [2024-07-13 15:45:22.255254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.749 [2024-07-13 15:45:22.255275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.749 [2024-07-13 15:45:22.255289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.749 [2024-07-13 15:45:22.255321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.749 qpair failed and we were unable to recover it. 00:33:51.749 [2024-07-13 15:45:22.265093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.749 [2024-07-13 15:45:22.265222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.749 [2024-07-13 15:45:22.265248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.749 [2024-07-13 15:45:22.265263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.749 [2024-07-13 15:45:22.265276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.749 [2024-07-13 15:45:22.265305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.749 qpair failed and we were unable to recover it. 00:33:51.749 [2024-07-13 15:45:22.275165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.749 [2024-07-13 15:45:22.275302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.749 [2024-07-13 15:45:22.275327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.749 [2024-07-13 15:45:22.275341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.749 [2024-07-13 15:45:22.275353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.749 [2024-07-13 15:45:22.275382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.749 qpair failed and we were unable to recover it. 
00:33:51.749 [2024-07-13 15:45:22.285182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.749 [2024-07-13 15:45:22.285319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.749 [2024-07-13 15:45:22.285345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.749 [2024-07-13 15:45:22.285360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.749 [2024-07-13 15:45:22.285372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.749 [2024-07-13 15:45:22.285401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.749 qpair failed and we were unable to recover it. 00:33:51.749 [2024-07-13 15:45:22.295207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.749 [2024-07-13 15:45:22.295345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.749 [2024-07-13 15:45:22.295378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.749 [2024-07-13 15:45:22.295393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.749 [2024-07-13 15:45:22.295406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.749 [2024-07-13 15:45:22.295436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.749 qpair failed and we were unable to recover it. 00:33:51.749 [2024-07-13 15:45:22.305271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.749 [2024-07-13 15:45:22.305434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.749 [2024-07-13 15:45:22.305460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.749 [2024-07-13 15:45:22.305475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.749 [2024-07-13 15:45:22.305488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.749 [2024-07-13 15:45:22.305517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.749 qpair failed and we were unable to recover it. 
00:33:51.749 [2024-07-13 15:45:22.315253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.749 [2024-07-13 15:45:22.315383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.749 [2024-07-13 15:45:22.315410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.749 [2024-07-13 15:45:22.315424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.749 [2024-07-13 15:45:22.315437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.749 [2024-07-13 15:45:22.315466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.749 qpair failed and we were unable to recover it. 00:33:51.749 [2024-07-13 15:45:22.325312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.749 [2024-07-13 15:45:22.325447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.749 [2024-07-13 15:45:22.325472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.749 [2024-07-13 15:45:22.325487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.749 [2024-07-13 15:45:22.325500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.749 [2024-07-13 15:45:22.325529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.749 qpair failed and we were unable to recover it. 00:33:51.749 [2024-07-13 15:45:22.335330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.749 [2024-07-13 15:45:22.335467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.749 [2024-07-13 15:45:22.335492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.749 [2024-07-13 15:45:22.335507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.749 [2024-07-13 15:45:22.335520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.749 [2024-07-13 15:45:22.335555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.749 qpair failed and we were unable to recover it. 
00:33:51.749 [2024-07-13 15:45:22.345373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.749 [2024-07-13 15:45:22.345539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.749 [2024-07-13 15:45:22.345564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.749 [2024-07-13 15:45:22.345578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.749 [2024-07-13 15:45:22.345592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.749 [2024-07-13 15:45:22.345621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.749 qpair failed and we were unable to recover it. 00:33:51.749 [2024-07-13 15:45:22.355391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.749 [2024-07-13 15:45:22.355560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.749 [2024-07-13 15:45:22.355586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.749 [2024-07-13 15:45:22.355603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.749 [2024-07-13 15:45:22.355617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.749 [2024-07-13 15:45:22.355646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.749 qpair failed and we were unable to recover it. 00:33:51.749 [2024-07-13 15:45:22.365415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.749 [2024-07-13 15:45:22.365573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.749 [2024-07-13 15:45:22.365599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.749 [2024-07-13 15:45:22.365613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.749 [2024-07-13 15:45:22.365626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.749 [2024-07-13 15:45:22.365657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.749 qpair failed and we were unable to recover it. 
00:33:51.749 [2024-07-13 15:45:22.375452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.749 [2024-07-13 15:45:22.375582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.749 [2024-07-13 15:45:22.375607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.749 [2024-07-13 15:45:22.375622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.749 [2024-07-13 15:45:22.375634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.749 [2024-07-13 15:45:22.375665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.749 qpair failed and we were unable to recover it. 00:33:51.749 [2024-07-13 15:45:22.385467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.749 [2024-07-13 15:45:22.385636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.750 [2024-07-13 15:45:22.385668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.750 [2024-07-13 15:45:22.385683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.750 [2024-07-13 15:45:22.385696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.750 [2024-07-13 15:45:22.385725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.750 qpair failed and we were unable to recover it. 00:33:51.750 [2024-07-13 15:45:22.395537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.750 [2024-07-13 15:45:22.395709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.750 [2024-07-13 15:45:22.395735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.750 [2024-07-13 15:45:22.395749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.750 [2024-07-13 15:45:22.395762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.750 [2024-07-13 15:45:22.395792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.750 qpair failed and we were unable to recover it. 
00:33:51.750 [2024-07-13 15:45:22.405515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.750 [2024-07-13 15:45:22.405661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.750 [2024-07-13 15:45:22.405687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.750 [2024-07-13 15:45:22.405702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.750 [2024-07-13 15:45:22.405714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.750 [2024-07-13 15:45:22.405743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.750 qpair failed and we were unable to recover it. 00:33:51.750 [2024-07-13 15:45:22.415567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.750 [2024-07-13 15:45:22.415734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.750 [2024-07-13 15:45:22.415760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.750 [2024-07-13 15:45:22.415774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.750 [2024-07-13 15:45:22.415787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.750 [2024-07-13 15:45:22.415816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.750 qpair failed and we were unable to recover it. 00:33:51.750 [2024-07-13 15:45:22.425557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.750 [2024-07-13 15:45:22.425685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.750 [2024-07-13 15:45:22.425711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.750 [2024-07-13 15:45:22.425725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.750 [2024-07-13 15:45:22.425738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.750 [2024-07-13 15:45:22.425772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.750 qpair failed and we were unable to recover it. 
00:33:51.750 [2024-07-13 15:45:22.435603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.750 [2024-07-13 15:45:22.435736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.750 [2024-07-13 15:45:22.435762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.750 [2024-07-13 15:45:22.435776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.750 [2024-07-13 15:45:22.435789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.750 [2024-07-13 15:45:22.435818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.750 qpair failed and we were unable to recover it. 00:33:51.750 [2024-07-13 15:45:22.445644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.750 [2024-07-13 15:45:22.445800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.750 [2024-07-13 15:45:22.445827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.750 [2024-07-13 15:45:22.445841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.750 [2024-07-13 15:45:22.445857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.750 [2024-07-13 15:45:22.445894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.750 qpair failed and we were unable to recover it. 00:33:51.750 [2024-07-13 15:45:22.455650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.750 [2024-07-13 15:45:22.455777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.750 [2024-07-13 15:45:22.455803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.750 [2024-07-13 15:45:22.455817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.750 [2024-07-13 15:45:22.455830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.750 [2024-07-13 15:45:22.455861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.750 qpair failed and we were unable to recover it. 
00:33:51.750 [2024-07-13 15:45:22.465666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.750 [2024-07-13 15:45:22.465791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.750 [2024-07-13 15:45:22.465818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.750 [2024-07-13 15:45:22.465832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.750 [2024-07-13 15:45:22.465845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.750 [2024-07-13 15:45:22.465884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.750 qpair failed and we were unable to recover it. 00:33:51.750 [2024-07-13 15:45:22.475752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.750 [2024-07-13 15:45:22.475904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.750 [2024-07-13 15:45:22.475932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.750 [2024-07-13 15:45:22.475947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.750 [2024-07-13 15:45:22.475959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.750 [2024-07-13 15:45:22.475988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.750 qpair failed and we were unable to recover it. 00:33:51.750 [2024-07-13 15:45:22.485742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.750 [2024-07-13 15:45:22.485879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.750 [2024-07-13 15:45:22.485906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.750 [2024-07-13 15:45:22.485920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.750 [2024-07-13 15:45:22.485933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.750 [2024-07-13 15:45:22.485963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.750 qpair failed and we were unable to recover it. 
00:33:51.750 [2024-07-13 15:45:22.495773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.750 [2024-07-13 15:45:22.495941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.750 [2024-07-13 15:45:22.495970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.750 [2024-07-13 15:45:22.495984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.750 [2024-07-13 15:45:22.495998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.750 [2024-07-13 15:45:22.496027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.750 qpair failed and we were unable to recover it. 00:33:51.750 [2024-07-13 15:45:22.505816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:51.750 [2024-07-13 15:45:22.505960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:51.750 [2024-07-13 15:45:22.505986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:51.750 [2024-07-13 15:45:22.506001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:51.750 [2024-07-13 15:45:22.506015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:51.750 [2024-07-13 15:45:22.506045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:51.750 qpair failed and we were unable to recover it. 00:33:52.010 [2024-07-13 15:45:22.515834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.010 [2024-07-13 15:45:22.516020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.010 [2024-07-13 15:45:22.516047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.010 [2024-07-13 15:45:22.516062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.010 [2024-07-13 15:45:22.516081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.010 [2024-07-13 15:45:22.516111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.010 qpair failed and we were unable to recover it. 
00:33:52.010 [2024-07-13 15:45:22.525878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.010 [2024-07-13 15:45:22.526011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.010 [2024-07-13 15:45:22.526038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.010 [2024-07-13 15:45:22.526052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.010 [2024-07-13 15:45:22.526065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.010 [2024-07-13 15:45:22.526095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.010 qpair failed and we were unable to recover it. 00:33:52.010 [2024-07-13 15:45:22.535917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.010 [2024-07-13 15:45:22.536045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.010 [2024-07-13 15:45:22.536072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.010 [2024-07-13 15:45:22.536087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.010 [2024-07-13 15:45:22.536100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.010 [2024-07-13 15:45:22.536132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.010 qpair failed and we were unable to recover it. 00:33:52.010 [2024-07-13 15:45:22.545908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.010 [2024-07-13 15:45:22.546040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.010 [2024-07-13 15:45:22.546066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.010 [2024-07-13 15:45:22.546081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.010 [2024-07-13 15:45:22.546094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.010 [2024-07-13 15:45:22.546125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.010 qpair failed and we were unable to recover it. 
00:33:52.010 [2024-07-13 15:45:22.556017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.010 [2024-07-13 15:45:22.556151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.010 [2024-07-13 15:45:22.556177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.010 [2024-07-13 15:45:22.556191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.010 [2024-07-13 15:45:22.556210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.010 [2024-07-13 15:45:22.556241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.010 qpair failed and we were unable to recover it. 00:33:52.010 [2024-07-13 15:45:22.565981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.010 [2024-07-13 15:45:22.566131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.010 [2024-07-13 15:45:22.566157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.010 [2024-07-13 15:45:22.566171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.010 [2024-07-13 15:45:22.566185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.010 [2024-07-13 15:45:22.566216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.010 qpair failed and we were unable to recover it. 00:33:52.010 [2024-07-13 15:45:22.576104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.011 [2024-07-13 15:45:22.576256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.011 [2024-07-13 15:45:22.576282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.011 [2024-07-13 15:45:22.576296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.011 [2024-07-13 15:45:22.576309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.011 [2024-07-13 15:45:22.576341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.011 qpair failed and we were unable to recover it. 
00:33:52.011 [2024-07-13 15:45:22.586040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.011 [2024-07-13 15:45:22.586175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.011 [2024-07-13 15:45:22.586201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.011 [2024-07-13 15:45:22.586215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.011 [2024-07-13 15:45:22.586229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.011 [2024-07-13 15:45:22.586258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.011 qpair failed and we were unable to recover it. 00:33:52.011 [2024-07-13 15:45:22.596100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.011 [2024-07-13 15:45:22.596242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.011 [2024-07-13 15:45:22.596268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.011 [2024-07-13 15:45:22.596282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.011 [2024-07-13 15:45:22.596295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.011 [2024-07-13 15:45:22.596324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.011 qpair failed and we were unable to recover it. 00:33:52.011 [2024-07-13 15:45:22.606130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.011 [2024-07-13 15:45:22.606262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.011 [2024-07-13 15:45:22.606288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.011 [2024-07-13 15:45:22.606309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.011 [2024-07-13 15:45:22.606323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.011 [2024-07-13 15:45:22.606353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.011 qpair failed and we were unable to recover it. 
00:33:52.011 [2024-07-13 15:45:22.616139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.011 [2024-07-13 15:45:22.616270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.011 [2024-07-13 15:45:22.616296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.011 [2024-07-13 15:45:22.616310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.011 [2024-07-13 15:45:22.616323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.011 [2024-07-13 15:45:22.616352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.011 qpair failed and we were unable to recover it. 00:33:52.011 [2024-07-13 15:45:22.626148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.011 [2024-07-13 15:45:22.626286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.011 [2024-07-13 15:45:22.626311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.011 [2024-07-13 15:45:22.626326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.011 [2024-07-13 15:45:22.626339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.011 [2024-07-13 15:45:22.626368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.011 qpair failed and we were unable to recover it. 00:33:52.011 [2024-07-13 15:45:22.636215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.011 [2024-07-13 15:45:22.636372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.011 [2024-07-13 15:45:22.636398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.011 [2024-07-13 15:45:22.636412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.011 [2024-07-13 15:45:22.636425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.011 [2024-07-13 15:45:22.636454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.011 qpair failed and we were unable to recover it. 
00:33:52.011 [2024-07-13 15:45:22.646271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.011 [2024-07-13 15:45:22.646447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.011 [2024-07-13 15:45:22.646473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.011 [2024-07-13 15:45:22.646487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.011 [2024-07-13 15:45:22.646500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.011 [2024-07-13 15:45:22.646542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.011 qpair failed and we were unable to recover it. 00:33:52.011 [2024-07-13 15:45:22.656246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.011 [2024-07-13 15:45:22.656396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.011 [2024-07-13 15:45:22.656423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.011 [2024-07-13 15:45:22.656437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.011 [2024-07-13 15:45:22.656451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.011 [2024-07-13 15:45:22.656482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.011 qpair failed and we were unable to recover it. 00:33:52.011 [2024-07-13 15:45:22.666291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.011 [2024-07-13 15:45:22.666451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.011 [2024-07-13 15:45:22.666477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.011 [2024-07-13 15:45:22.666492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.011 [2024-07-13 15:45:22.666505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.011 [2024-07-13 15:45:22.666534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.011 qpair failed and we were unable to recover it. 
00:33:52.011 [2024-07-13 15:45:22.676395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.011 [2024-07-13 15:45:22.676529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.011 [2024-07-13 15:45:22.676554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.011 [2024-07-13 15:45:22.676568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.011 [2024-07-13 15:45:22.676581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.011 [2024-07-13 15:45:22.676611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.011 qpair failed and we were unable to recover it. 00:33:52.011 [2024-07-13 15:45:22.686363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.011 [2024-07-13 15:45:22.686540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.011 [2024-07-13 15:45:22.686566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.011 [2024-07-13 15:45:22.686581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.011 [2024-07-13 15:45:22.686594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.011 [2024-07-13 15:45:22.686638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.011 qpair failed and we were unable to recover it. 00:33:52.011 [2024-07-13 15:45:22.696354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.011 [2024-07-13 15:45:22.696478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.011 [2024-07-13 15:45:22.696508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.011 [2024-07-13 15:45:22.696523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.011 [2024-07-13 15:45:22.696537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.011 [2024-07-13 15:45:22.696566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.011 qpair failed and we were unable to recover it. 
00:33:52.011 [2024-07-13 15:45:22.706400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.011 [2024-07-13 15:45:22.706537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.011 [2024-07-13 15:45:22.706563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.011 [2024-07-13 15:45:22.706577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.011 [2024-07-13 15:45:22.706591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.011 [2024-07-13 15:45:22.706620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.011 qpair failed and we were unable to recover it. 00:33:52.011 [2024-07-13 15:45:22.716412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.011 [2024-07-13 15:45:22.716554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.012 [2024-07-13 15:45:22.716579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.012 [2024-07-13 15:45:22.716593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.012 [2024-07-13 15:45:22.716607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.012 [2024-07-13 15:45:22.716637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.012 qpair failed and we were unable to recover it. 00:33:52.012 [2024-07-13 15:45:22.726435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.012 [2024-07-13 15:45:22.726568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.012 [2024-07-13 15:45:22.726594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.012 [2024-07-13 15:45:22.726608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.012 [2024-07-13 15:45:22.726621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.012 [2024-07-13 15:45:22.726651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.012 qpair failed and we were unable to recover it. 
00:33:52.012 [2024-07-13 15:45:22.736470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.012 [2024-07-13 15:45:22.736622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.012 [2024-07-13 15:45:22.736650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.012 [2024-07-13 15:45:22.736669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.012 [2024-07-13 15:45:22.736683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.012 [2024-07-13 15:45:22.736715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.012 qpair failed and we were unable to recover it. 00:33:52.012 [2024-07-13 15:45:22.746503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.012 [2024-07-13 15:45:22.746640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.012 [2024-07-13 15:45:22.746666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.012 [2024-07-13 15:45:22.746681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.012 [2024-07-13 15:45:22.746694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.012 [2024-07-13 15:45:22.746723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.012 qpair failed and we were unable to recover it. 00:33:52.012 [2024-07-13 15:45:22.756532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.012 [2024-07-13 15:45:22.756678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.012 [2024-07-13 15:45:22.756703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.012 [2024-07-13 15:45:22.756718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.012 [2024-07-13 15:45:22.756731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.012 [2024-07-13 15:45:22.756760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.012 qpair failed and we were unable to recover it. 
00:33:52.012 [2024-07-13 15:45:22.766549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.012 [2024-07-13 15:45:22.766686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.012 [2024-07-13 15:45:22.766711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.012 [2024-07-13 15:45:22.766726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.012 [2024-07-13 15:45:22.766739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.012 [2024-07-13 15:45:22.766768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.012 qpair failed and we were unable to recover it. 00:33:52.272 [2024-07-13 15:45:22.776548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.272 [2024-07-13 15:45:22.776681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.272 [2024-07-13 15:45:22.776708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.272 [2024-07-13 15:45:22.776722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.273 [2024-07-13 15:45:22.776735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.273 [2024-07-13 15:45:22.776765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-13 15:45:22.786600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.273 [2024-07-13 15:45:22.786729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.273 [2024-07-13 15:45:22.786760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.273 [2024-07-13 15:45:22.786775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.273 [2024-07-13 15:45:22.786789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.273 [2024-07-13 15:45:22.786818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.273 qpair failed and we were unable to recover it. 
00:33:52.273 [2024-07-13 15:45:22.796622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.273 [2024-07-13 15:45:22.796766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.273 [2024-07-13 15:45:22.796792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.273 [2024-07-13 15:45:22.796807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.273 [2024-07-13 15:45:22.796820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.273 [2024-07-13 15:45:22.796850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-13 15:45:22.806655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.273 [2024-07-13 15:45:22.806787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.273 [2024-07-13 15:45:22.806813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.273 [2024-07-13 15:45:22.806828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.273 [2024-07-13 15:45:22.806842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.273 [2024-07-13 15:45:22.806880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-13 15:45:22.816719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.273 [2024-07-13 15:45:22.816850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.273 [2024-07-13 15:45:22.816883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.273 [2024-07-13 15:45:22.816898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.273 [2024-07-13 15:45:22.816912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.273 [2024-07-13 15:45:22.816942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.273 qpair failed and we were unable to recover it. 
00:33:52.273 [2024-07-13 15:45:22.826721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.273 [2024-07-13 15:45:22.826873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.273 [2024-07-13 15:45:22.826899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.273 [2024-07-13 15:45:22.826913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.273 [2024-07-13 15:45:22.826927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.273 [2024-07-13 15:45:22.826963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-13 15:45:22.836749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.273 [2024-07-13 15:45:22.836895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.273 [2024-07-13 15:45:22.836920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.273 [2024-07-13 15:45:22.836935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.273 [2024-07-13 15:45:22.836948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.273 [2024-07-13 15:45:22.836981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-13 15:45:22.846782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.273 [2024-07-13 15:45:22.846937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.273 [2024-07-13 15:45:22.846962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.273 [2024-07-13 15:45:22.846976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.273 [2024-07-13 15:45:22.846991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.273 [2024-07-13 15:45:22.847021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.273 qpair failed and we were unable to recover it. 
00:33:52.273 [2024-07-13 15:45:22.856830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.273 [2024-07-13 15:45:22.856970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.273 [2024-07-13 15:45:22.856995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.273 [2024-07-13 15:45:22.857010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.273 [2024-07-13 15:45:22.857024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.273 [2024-07-13 15:45:22.857054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-13 15:45:22.866856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.273 [2024-07-13 15:45:22.867016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.273 [2024-07-13 15:45:22.867042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.273 [2024-07-13 15:45:22.867057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.273 [2024-07-13 15:45:22.867069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.273 [2024-07-13 15:45:22.867099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-13 15:45:22.876857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.273 [2024-07-13 15:45:22.876999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.273 [2024-07-13 15:45:22.877030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.273 [2024-07-13 15:45:22.877045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.273 [2024-07-13 15:45:22.877058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.273 [2024-07-13 15:45:22.877087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.273 qpair failed and we were unable to recover it. 
00:33:52.273 [2024-07-13 15:45:22.886885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.273 [2024-07-13 15:45:22.887030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.273 [2024-07-13 15:45:22.887055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.273 [2024-07-13 15:45:22.887069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.273 [2024-07-13 15:45:22.887082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.273 [2024-07-13 15:45:22.887111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-13 15:45:22.896904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.273 [2024-07-13 15:45:22.897034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.273 [2024-07-13 15:45:22.897059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.273 [2024-07-13 15:45:22.897073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.273 [2024-07-13 15:45:22.897087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.273 [2024-07-13 15:45:22.897116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.273 qpair failed and we were unable to recover it. 00:33:52.273 [2024-07-13 15:45:22.907018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.273 [2024-07-13 15:45:22.907154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.273 [2024-07-13 15:45:22.907179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.273 [2024-07-13 15:45:22.907194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.273 [2024-07-13 15:45:22.907206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.273 [2024-07-13 15:45:22.907235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.273 qpair failed and we were unable to recover it. 
00:33:52.273 [2024-07-13 15:45:22.916954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.273 [2024-07-13 15:45:22.917093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.273 [2024-07-13 15:45:22.917120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.273 [2024-07-13 15:45:22.917134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.274 [2024-07-13 15:45:22.917152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.274 [2024-07-13 15:45:22.917182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-13 15:45:22.926994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.274 [2024-07-13 15:45:22.927127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.274 [2024-07-13 15:45:22.927153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.274 [2024-07-13 15:45:22.927167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.274 [2024-07-13 15:45:22.927180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.274 [2024-07-13 15:45:22.927209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-13 15:45:22.937005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.274 [2024-07-13 15:45:22.937138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.274 [2024-07-13 15:45:22.937162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.274 [2024-07-13 15:45:22.937176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.274 [2024-07-13 15:45:22.937189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.274 [2024-07-13 15:45:22.937220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.274 qpair failed and we were unable to recover it. 
00:33:52.274 [2024-07-13 15:45:22.947061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.274 [2024-07-13 15:45:22.947189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.274 [2024-07-13 15:45:22.947214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.274 [2024-07-13 15:45:22.947229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.274 [2024-07-13 15:45:22.947242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.274 [2024-07-13 15:45:22.947272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-13 15:45:22.957157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.274 [2024-07-13 15:45:22.957291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.274 [2024-07-13 15:45:22.957316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.274 [2024-07-13 15:45:22.957330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.274 [2024-07-13 15:45:22.957343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.274 [2024-07-13 15:45:22.957372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-13 15:45:22.967086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.274 [2024-07-13 15:45:22.967221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.274 [2024-07-13 15:45:22.967246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.274 [2024-07-13 15:45:22.967261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.274 [2024-07-13 15:45:22.967274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.274 [2024-07-13 15:45:22.967303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.274 qpair failed and we were unable to recover it. 
00:33:52.274 [2024-07-13 15:45:22.977186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.274 [2024-07-13 15:45:22.977327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.274 [2024-07-13 15:45:22.977353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.274 [2024-07-13 15:45:22.977367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.274 [2024-07-13 15:45:22.977380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.274 [2024-07-13 15:45:22.977409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-13 15:45:22.987143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.274 [2024-07-13 15:45:22.987297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.274 [2024-07-13 15:45:22.987323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.274 [2024-07-13 15:45:22.987337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.274 [2024-07-13 15:45:22.987350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.274 [2024-07-13 15:45:22.987379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-13 15:45:22.997179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.274 [2024-07-13 15:45:22.997316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.274 [2024-07-13 15:45:22.997341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.274 [2024-07-13 15:45:22.997356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.274 [2024-07-13 15:45:22.997369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.274 [2024-07-13 15:45:22.997398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.274 qpair failed and we were unable to recover it. 
00:33:52.274 [2024-07-13 15:45:23.007223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.274 [2024-07-13 15:45:23.007361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.274 [2024-07-13 15:45:23.007387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.274 [2024-07-13 15:45:23.007407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.274 [2024-07-13 15:45:23.007421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.274 [2024-07-13 15:45:23.007451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-13 15:45:23.017327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.274 [2024-07-13 15:45:23.017471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.274 [2024-07-13 15:45:23.017497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.274 [2024-07-13 15:45:23.017511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.274 [2024-07-13 15:45:23.017524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.274 [2024-07-13 15:45:23.017553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.274 qpair failed and we were unable to recover it. 00:33:52.274 [2024-07-13 15:45:23.027291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.274 [2024-07-13 15:45:23.027433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.274 [2024-07-13 15:45:23.027458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.274 [2024-07-13 15:45:23.027472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.274 [2024-07-13 15:45:23.027486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.274 [2024-07-13 15:45:23.027515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.274 qpair failed and we were unable to recover it. 
00:33:52.535 [2024-07-13 15:45:23.037316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.535 [2024-07-13 15:45:23.037494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.535 [2024-07-13 15:45:23.037520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.535 [2024-07-13 15:45:23.037534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.535 [2024-07-13 15:45:23.037548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.535 [2024-07-13 15:45:23.037577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.535 qpair failed and we were unable to recover it. 00:33:52.535 [2024-07-13 15:45:23.047315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.535 [2024-07-13 15:45:23.047454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.535 [2024-07-13 15:45:23.047480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.535 [2024-07-13 15:45:23.047495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.535 [2024-07-13 15:45:23.047508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.535 [2024-07-13 15:45:23.047538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.535 qpair failed and we were unable to recover it. 00:33:52.535 [2024-07-13 15:45:23.057345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.535 [2024-07-13 15:45:23.057472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.535 [2024-07-13 15:45:23.057497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.535 [2024-07-13 15:45:23.057512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.535 [2024-07-13 15:45:23.057525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.535 [2024-07-13 15:45:23.057556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.535 qpair failed and we were unable to recover it. 
00:33:52.535 [2024-07-13 15:45:23.067409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.535 [2024-07-13 15:45:23.067551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.535 [2024-07-13 15:45:23.067576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.535 [2024-07-13 15:45:23.067591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.535 [2024-07-13 15:45:23.067605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.535 [2024-07-13 15:45:23.067634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.535 qpair failed and we were unable to recover it. 00:33:52.535 [2024-07-13 15:45:23.077443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.535 [2024-07-13 15:45:23.077620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.535 [2024-07-13 15:45:23.077645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.535 [2024-07-13 15:45:23.077659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.535 [2024-07-13 15:45:23.077672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.535 [2024-07-13 15:45:23.077701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.535 qpair failed and we were unable to recover it. 00:33:52.535 [2024-07-13 15:45:23.087453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.535 [2024-07-13 15:45:23.087637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.535 [2024-07-13 15:45:23.087663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.535 [2024-07-13 15:45:23.087678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.535 [2024-07-13 15:45:23.087691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.535 [2024-07-13 15:45:23.087724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.535 qpair failed and we were unable to recover it. 
00:33:52.535 [2024-07-13 15:45:23.097494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.535 [2024-07-13 15:45:23.097624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.535 [2024-07-13 15:45:23.097650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.535 [2024-07-13 15:45:23.097671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.535 [2024-07-13 15:45:23.097685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.535 [2024-07-13 15:45:23.097715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.535 qpair failed and we were unable to recover it. 00:33:52.535 [2024-07-13 15:45:23.107527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.535 [2024-07-13 15:45:23.107661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.535 [2024-07-13 15:45:23.107686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.535 [2024-07-13 15:45:23.107701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.535 [2024-07-13 15:45:23.107713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.535 [2024-07-13 15:45:23.107742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.535 qpair failed and we were unable to recover it. 00:33:52.535 [2024-07-13 15:45:23.117537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.535 [2024-07-13 15:45:23.117688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.535 [2024-07-13 15:45:23.117713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.535 [2024-07-13 15:45:23.117728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.535 [2024-07-13 15:45:23.117741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.535 [2024-07-13 15:45:23.117771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.535 qpair failed and we were unable to recover it. 
00:33:52.535 [2024-07-13 15:45:23.127547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.535 [2024-07-13 15:45:23.127684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.535 [2024-07-13 15:45:23.127709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.535 [2024-07-13 15:45:23.127723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.535 [2024-07-13 15:45:23.127737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.535 [2024-07-13 15:45:23.127766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.535 qpair failed and we were unable to recover it. 00:33:52.535 [2024-07-13 15:45:23.137577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.535 [2024-07-13 15:45:23.137710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.535 [2024-07-13 15:45:23.137735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.536 [2024-07-13 15:45:23.137750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.536 [2024-07-13 15:45:23.137763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.536 [2024-07-13 15:45:23.137792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.536 qpair failed and we were unable to recover it. 00:33:52.536 [2024-07-13 15:45:23.147617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.536 [2024-07-13 15:45:23.147755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.536 [2024-07-13 15:45:23.147782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.536 [2024-07-13 15:45:23.147797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.536 [2024-07-13 15:45:23.147810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.536 [2024-07-13 15:45:23.147839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.536 qpair failed and we were unable to recover it. 
00:33:52.536 [2024-07-13 15:45:23.157698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.536 [2024-07-13 15:45:23.157834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.536 [2024-07-13 15:45:23.157860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.536 [2024-07-13 15:45:23.157887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.536 [2024-07-13 15:45:23.157902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.536 [2024-07-13 15:45:23.157932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.536 qpair failed and we were unable to recover it. 00:33:52.536 [2024-07-13 15:45:23.167665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.536 [2024-07-13 15:45:23.167799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.536 [2024-07-13 15:45:23.167824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.536 [2024-07-13 15:45:23.167839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.536 [2024-07-13 15:45:23.167852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.536 [2024-07-13 15:45:23.167888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.536 qpair failed and we were unable to recover it. 00:33:52.536 [2024-07-13 15:45:23.177710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.536 [2024-07-13 15:45:23.177845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.536 [2024-07-13 15:45:23.177876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.536 [2024-07-13 15:45:23.177893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.536 [2024-07-13 15:45:23.177906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.536 [2024-07-13 15:45:23.177935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.536 qpair failed and we were unable to recover it. 
00:33:52.536 [2024-07-13 15:45:23.187736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.536 [2024-07-13 15:45:23.187882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.536 [2024-07-13 15:45:23.187912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.536 [2024-07-13 15:45:23.187927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.536 [2024-07-13 15:45:23.187940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.536 [2024-07-13 15:45:23.187970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.536 qpair failed and we were unable to recover it. 00:33:52.536 [2024-07-13 15:45:23.197758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.536 [2024-07-13 15:45:23.197909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.536 [2024-07-13 15:45:23.197935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.536 [2024-07-13 15:45:23.197949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.536 [2024-07-13 15:45:23.197962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.536 [2024-07-13 15:45:23.197991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.536 qpair failed and we were unable to recover it. 00:33:52.536 [2024-07-13 15:45:23.207810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.536 [2024-07-13 15:45:23.207949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.536 [2024-07-13 15:45:23.207975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.536 [2024-07-13 15:45:23.207989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.536 [2024-07-13 15:45:23.208002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.536 [2024-07-13 15:45:23.208032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.536 qpair failed and we were unable to recover it. 
00:33:52.536 [2024-07-13 15:45:23.217821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.536 [2024-07-13 15:45:23.217964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.536 [2024-07-13 15:45:23.217989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.536 [2024-07-13 15:45:23.218004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.536 [2024-07-13 15:45:23.218016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.536 [2024-07-13 15:45:23.218046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.536 qpair failed and we were unable to recover it. 00:33:52.536 [2024-07-13 15:45:23.227929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.536 [2024-07-13 15:45:23.228063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.536 [2024-07-13 15:45:23.228089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.536 [2024-07-13 15:45:23.228103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.536 [2024-07-13 15:45:23.228115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.536 [2024-07-13 15:45:23.228150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.536 qpair failed and we were unable to recover it. 00:33:52.536 [2024-07-13 15:45:23.237929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.536 [2024-07-13 15:45:23.238065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.536 [2024-07-13 15:45:23.238090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.536 [2024-07-13 15:45:23.238105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.536 [2024-07-13 15:45:23.238118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.536 [2024-07-13 15:45:23.238160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.536 qpair failed and we were unable to recover it. 
00:33:52.536 [2024-07-13 15:45:23.247933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.536 [2024-07-13 15:45:23.248089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.536 [2024-07-13 15:45:23.248115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.536 [2024-07-13 15:45:23.248130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.536 [2024-07-13 15:45:23.248143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.536 [2024-07-13 15:45:23.248173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.536 qpair failed and we were unable to recover it. 00:33:52.536 [2024-07-13 15:45:23.257931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.536 [2024-07-13 15:45:23.258061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.536 [2024-07-13 15:45:23.258087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.536 [2024-07-13 15:45:23.258102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.536 [2024-07-13 15:45:23.258115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.536 [2024-07-13 15:45:23.258144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.536 qpair failed and we were unable to recover it. 00:33:52.536 [2024-07-13 15:45:23.267948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.536 [2024-07-13 15:45:23.268090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.536 [2024-07-13 15:45:23.268115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.536 [2024-07-13 15:45:23.268130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.536 [2024-07-13 15:45:23.268143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.536 [2024-07-13 15:45:23.268172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.536 qpair failed and we were unable to recover it. 
00:33:52.536 [2024-07-13 15:45:23.278000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.536 [2024-07-13 15:45:23.278131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.536 [2024-07-13 15:45:23.278159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.537 [2024-07-13 15:45:23.278174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.537 [2024-07-13 15:45:23.278186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.537 [2024-07-13 15:45:23.278214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.537 qpair failed and we were unable to recover it. 00:33:52.537 [2024-07-13 15:45:23.288008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.537 [2024-07-13 15:45:23.288138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.537 [2024-07-13 15:45:23.288164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.537 [2024-07-13 15:45:23.288178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.537 [2024-07-13 15:45:23.288191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.537 [2024-07-13 15:45:23.288220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.537 qpair failed and we were unable to recover it. 00:33:52.537 [2024-07-13 15:45:23.298066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.537 [2024-07-13 15:45:23.298199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.537 [2024-07-13 15:45:23.298224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.537 [2024-07-13 15:45:23.298238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.537 [2024-07-13 15:45:23.298251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.537 [2024-07-13 15:45:23.298280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.537 qpair failed and we were unable to recover it. 
00:33:52.796 [2024-07-13 15:45:23.308116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.796 [2024-07-13 15:45:23.308273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.796 [2024-07-13 15:45:23.308300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.796 [2024-07-13 15:45:23.308315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.796 [2024-07-13 15:45:23.308329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.796 [2024-07-13 15:45:23.308371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.796 qpair failed and we were unable to recover it. 00:33:52.796 [2024-07-13 15:45:23.318127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.796 [2024-07-13 15:45:23.318284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.796 [2024-07-13 15:45:23.318310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.796 [2024-07-13 15:45:23.318325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.796 [2024-07-13 15:45:23.318343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.796 [2024-07-13 15:45:23.318373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.796 qpair failed and we were unable to recover it. 00:33:52.796 [2024-07-13 15:45:23.328210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.796 [2024-07-13 15:45:23.328349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.796 [2024-07-13 15:45:23.328374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.796 [2024-07-13 15:45:23.328388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.796 [2024-07-13 15:45:23.328401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.796 [2024-07-13 15:45:23.328430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.796 qpair failed and we were unable to recover it. 
00:33:52.797 [2024-07-13 15:45:23.338206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.797 [2024-07-13 15:45:23.338339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.797 [2024-07-13 15:45:23.338368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.797 [2024-07-13 15:45:23.338382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.797 [2024-07-13 15:45:23.338396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.797 [2024-07-13 15:45:23.338425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.797 qpair failed and we were unable to recover it. 00:33:52.797 [2024-07-13 15:45:23.348229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.797 [2024-07-13 15:45:23.348407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.797 [2024-07-13 15:45:23.348432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.797 [2024-07-13 15:45:23.348446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.797 [2024-07-13 15:45:23.348459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.797 [2024-07-13 15:45:23.348490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.797 qpair failed and we were unable to recover it. 00:33:52.797 [2024-07-13 15:45:23.358224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.797 [2024-07-13 15:45:23.358372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.797 [2024-07-13 15:45:23.358398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.797 [2024-07-13 15:45:23.358413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.797 [2024-07-13 15:45:23.358426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.797 [2024-07-13 15:45:23.358455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.797 qpair failed and we were unable to recover it. 
00:33:52.797 [2024-07-13 15:45:23.368281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.797 [2024-07-13 15:45:23.368469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.797 [2024-07-13 15:45:23.368495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.797 [2024-07-13 15:45:23.368509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.797 [2024-07-13 15:45:23.368522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.797 [2024-07-13 15:45:23.368552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.797 qpair failed and we were unable to recover it. 00:33:52.797 [2024-07-13 15:45:23.378249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.797 [2024-07-13 15:45:23.378382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.797 [2024-07-13 15:45:23.378408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.797 [2024-07-13 15:45:23.378422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.797 [2024-07-13 15:45:23.378435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.797 [2024-07-13 15:45:23.378466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.797 qpair failed and we were unable to recover it. 00:33:52.797 [2024-07-13 15:45:23.388287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.797 [2024-07-13 15:45:23.388442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.797 [2024-07-13 15:45:23.388468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.797 [2024-07-13 15:45:23.388482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.797 [2024-07-13 15:45:23.388495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.797 [2024-07-13 15:45:23.388524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.797 qpair failed and we were unable to recover it. 
00:33:52.797 [2024-07-13 15:45:23.398314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.797 [2024-07-13 15:45:23.398450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.797 [2024-07-13 15:45:23.398475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.797 [2024-07-13 15:45:23.398490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.797 [2024-07-13 15:45:23.398503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.797 [2024-07-13 15:45:23.398532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.797 qpair failed and we were unable to recover it. 00:33:52.797 [2024-07-13 15:45:23.408474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.797 [2024-07-13 15:45:23.408624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.797 [2024-07-13 15:45:23.408650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.797 [2024-07-13 15:45:23.408671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.797 [2024-07-13 15:45:23.408684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.797 [2024-07-13 15:45:23.408714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.797 qpair failed and we were unable to recover it. 00:33:52.797 [2024-07-13 15:45:23.418362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.797 [2024-07-13 15:45:23.418500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.797 [2024-07-13 15:45:23.418525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.797 [2024-07-13 15:45:23.418539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.797 [2024-07-13 15:45:23.418552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.797 [2024-07-13 15:45:23.418584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.797 qpair failed and we were unable to recover it. 
00:33:52.797 [2024-07-13 15:45:23.428415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.797 [2024-07-13 15:45:23.428549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.797 [2024-07-13 15:45:23.428575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.797 [2024-07-13 15:45:23.428590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.797 [2024-07-13 15:45:23.428603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.797 [2024-07-13 15:45:23.428632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.797 qpair failed and we were unable to recover it. 00:33:52.797 [2024-07-13 15:45:23.438439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.797 [2024-07-13 15:45:23.438579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.797 [2024-07-13 15:45:23.438604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.797 [2024-07-13 15:45:23.438618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.797 [2024-07-13 15:45:23.438632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.797 [2024-07-13 15:45:23.438660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.797 qpair failed and we were unable to recover it. 00:33:52.797 [2024-07-13 15:45:23.448454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.797 [2024-07-13 15:45:23.448600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.797 [2024-07-13 15:45:23.448625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.797 [2024-07-13 15:45:23.448639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.797 [2024-07-13 15:45:23.448652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.797 [2024-07-13 15:45:23.448682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.797 qpair failed and we were unable to recover it. 
00:33:52.797 [2024-07-13 15:45:23.458485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.797 [2024-07-13 15:45:23.458632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.797 [2024-07-13 15:45:23.458658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.797 [2024-07-13 15:45:23.458672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.797 [2024-07-13 15:45:23.458685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.797 [2024-07-13 15:45:23.458714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.797 qpair failed and we were unable to recover it. 00:33:52.797 [2024-07-13 15:45:23.468502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.797 [2024-07-13 15:45:23.468643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.797 [2024-07-13 15:45:23.468669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.797 [2024-07-13 15:45:23.468683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.797 [2024-07-13 15:45:23.468696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.797 [2024-07-13 15:45:23.468725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.797 qpair failed and we were unable to recover it. 00:33:52.797 [2024-07-13 15:45:23.478695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.798 [2024-07-13 15:45:23.478863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.798 [2024-07-13 15:45:23.478900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.798 [2024-07-13 15:45:23.478918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.798 [2024-07-13 15:45:23.478931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.798 [2024-07-13 15:45:23.478962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.798 qpair failed and we were unable to recover it. 
00:33:52.798 [2024-07-13 15:45:23.488580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.798 [2024-07-13 15:45:23.488740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.798 [2024-07-13 15:45:23.488767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.798 [2024-07-13 15:45:23.488781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.798 [2024-07-13 15:45:23.488794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.798 [2024-07-13 15:45:23.488824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.798 qpair failed and we were unable to recover it. 00:33:52.798 [2024-07-13 15:45:23.498670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.798 [2024-07-13 15:45:23.498800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.798 [2024-07-13 15:45:23.498826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.798 [2024-07-13 15:45:23.498847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.798 [2024-07-13 15:45:23.498861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.798 [2024-07-13 15:45:23.498911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.798 qpair failed and we were unable to recover it. 00:33:52.798 [2024-07-13 15:45:23.508678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.798 [2024-07-13 15:45:23.508848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.798 [2024-07-13 15:45:23.508882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.798 [2024-07-13 15:45:23.508898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.798 [2024-07-13 15:45:23.508911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.798 [2024-07-13 15:45:23.508941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.798 qpair failed and we were unable to recover it. 
00:33:52.798 [2024-07-13 15:45:23.518747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.798 [2024-07-13 15:45:23.518898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.798 [2024-07-13 15:45:23.518927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.798 [2024-07-13 15:45:23.518943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.798 [2024-07-13 15:45:23.518956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.798 [2024-07-13 15:45:23.518986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.798 qpair failed and we were unable to recover it. 00:33:52.798 [2024-07-13 15:45:23.528703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.798 [2024-07-13 15:45:23.528890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.798 [2024-07-13 15:45:23.528916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.798 [2024-07-13 15:45:23.528930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.798 [2024-07-13 15:45:23.528944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.798 [2024-07-13 15:45:23.528973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.798 qpair failed and we were unable to recover it. 00:33:52.798 [2024-07-13 15:45:23.538800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.798 [2024-07-13 15:45:23.538951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.798 [2024-07-13 15:45:23.538979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.798 [2024-07-13 15:45:23.538996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.798 [2024-07-13 15:45:23.539010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.798 [2024-07-13 15:45:23.539040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.798 qpair failed and we were unable to recover it. 
00:33:52.798 [2024-07-13 15:45:23.548736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.798 [2024-07-13 15:45:23.548881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.798 [2024-07-13 15:45:23.548907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.798 [2024-07-13 15:45:23.548921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.798 [2024-07-13 15:45:23.548935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.798 [2024-07-13 15:45:23.548964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.798 qpair failed and we were unable to recover it. 00:33:52.798 [2024-07-13 15:45:23.558803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:52.798 [2024-07-13 15:45:23.558946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:52.798 [2024-07-13 15:45:23.558972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:52.798 [2024-07-13 15:45:23.558986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:52.798 [2024-07-13 15:45:23.558999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:52.798 [2024-07-13 15:45:23.559028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:52.798 qpair failed and we were unable to recover it. 00:33:53.057 [2024-07-13 15:45:23.568836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.057 [2024-07-13 15:45:23.569037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.057 [2024-07-13 15:45:23.569063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.057 [2024-07-13 15:45:23.569078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.057 [2024-07-13 15:45:23.569091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.057 [2024-07-13 15:45:23.569121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.057 qpair failed and we were unable to recover it. 
00:33:53.057 [2024-07-13 15:45:23.578821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.057 [2024-07-13 15:45:23.578967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.057 [2024-07-13 15:45:23.578994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.057 [2024-07-13 15:45:23.579008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.057 [2024-07-13 15:45:23.579024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.057 [2024-07-13 15:45:23.579054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.057 qpair failed and we were unable to recover it. 00:33:53.057 [2024-07-13 15:45:23.588854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.057 [2024-07-13 15:45:23.588990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.057 [2024-07-13 15:45:23.589021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.057 [2024-07-13 15:45:23.589036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.058 [2024-07-13 15:45:23.589050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.058 [2024-07-13 15:45:23.589079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.058 qpair failed and we were unable to recover it. 00:33:53.058 [2024-07-13 15:45:23.598901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.058 [2024-07-13 15:45:23.599036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.058 [2024-07-13 15:45:23.599061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.058 [2024-07-13 15:45:23.599076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.058 [2024-07-13 15:45:23.599089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.058 [2024-07-13 15:45:23.599118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.058 qpair failed and we were unable to recover it. 
00:33:53.058 [2024-07-13 15:45:23.608902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.058 [2024-07-13 15:45:23.609035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.058 [2024-07-13 15:45:23.609061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.058 [2024-07-13 15:45:23.609075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.058 [2024-07-13 15:45:23.609088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.058 [2024-07-13 15:45:23.609118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.058 qpair failed and we were unable to recover it. 00:33:53.058 [2024-07-13 15:45:23.618925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.058 [2024-07-13 15:45:23.619058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.058 [2024-07-13 15:45:23.619083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.058 [2024-07-13 15:45:23.619098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.058 [2024-07-13 15:45:23.619111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.058 [2024-07-13 15:45:23.619140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.058 qpair failed and we were unable to recover it. 00:33:53.058 [2024-07-13 15:45:23.628951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.058 [2024-07-13 15:45:23.629087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.058 [2024-07-13 15:45:23.629113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.058 [2024-07-13 15:45:23.629127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.058 [2024-07-13 15:45:23.629139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.058 [2024-07-13 15:45:23.629174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.058 qpair failed and we were unable to recover it. 
00:33:53.058 [2024-07-13 15:45:23.638984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.058 [2024-07-13 15:45:23.639124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.058 [2024-07-13 15:45:23.639149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.058 [2024-07-13 15:45:23.639163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.058 [2024-07-13 15:45:23.639176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.058 [2024-07-13 15:45:23.639205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.058 qpair failed and we were unable to recover it. 00:33:53.058 [2024-07-13 15:45:23.649048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.058 [2024-07-13 15:45:23.649228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.058 [2024-07-13 15:45:23.649253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.058 [2024-07-13 15:45:23.649267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.058 [2024-07-13 15:45:23.649279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.058 [2024-07-13 15:45:23.649308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.058 qpair failed and we were unable to recover it. 00:33:53.058 [2024-07-13 15:45:23.659024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.058 [2024-07-13 15:45:23.659151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.058 [2024-07-13 15:45:23.659176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.058 [2024-07-13 15:45:23.659191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.058 [2024-07-13 15:45:23.659204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.058 [2024-07-13 15:45:23.659233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.058 qpair failed and we were unable to recover it. 
00:33:53.058 [2024-07-13 15:45:23.669093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.058 [2024-07-13 15:45:23.669293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.058 [2024-07-13 15:45:23.669319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.058 [2024-07-13 15:45:23.669333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.058 [2024-07-13 15:45:23.669346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.058 [2024-07-13 15:45:23.669375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.058 qpair failed and we were unable to recover it. 00:33:53.058 [2024-07-13 15:45:23.679114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.058 [2024-07-13 15:45:23.679263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.058 [2024-07-13 15:45:23.679293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.058 [2024-07-13 15:45:23.679309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.058 [2024-07-13 15:45:23.679322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.058 [2024-07-13 15:45:23.679351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.058 qpair failed and we were unable to recover it. 00:33:53.058 [2024-07-13 15:45:23.689162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.058 [2024-07-13 15:45:23.689298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.058 [2024-07-13 15:45:23.689323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.058 [2024-07-13 15:45:23.689338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.058 [2024-07-13 15:45:23.689351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.058 [2024-07-13 15:45:23.689380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.058 qpair failed and we were unable to recover it. 
00:33:53.058 [2024-07-13 15:45:23.699163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.058 [2024-07-13 15:45:23.699296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.058 [2024-07-13 15:45:23.699321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.058 [2024-07-13 15:45:23.699336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.058 [2024-07-13 15:45:23.699348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.058 [2024-07-13 15:45:23.699378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.058 qpair failed and we were unable to recover it. 00:33:53.058 [2024-07-13 15:45:23.709199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.058 [2024-07-13 15:45:23.709332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.058 [2024-07-13 15:45:23.709357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.058 [2024-07-13 15:45:23.709372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.058 [2024-07-13 15:45:23.709385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.058 [2024-07-13 15:45:23.709414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.058 qpair failed and we were unable to recover it. 00:33:53.058 [2024-07-13 15:45:23.719225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.058 [2024-07-13 15:45:23.719366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.058 [2024-07-13 15:45:23.719392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.058 [2024-07-13 15:45:23.719406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.058 [2024-07-13 15:45:23.719427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.058 [2024-07-13 15:45:23.719457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.058 qpair failed and we were unable to recover it. 
00:33:53.058 [2024-07-13 15:45:23.729225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.058 [2024-07-13 15:45:23.729353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.058 [2024-07-13 15:45:23.729378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.058 [2024-07-13 15:45:23.729392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.058 [2024-07-13 15:45:23.729405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.058 [2024-07-13 15:45:23.729434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.059 qpair failed and we were unable to recover it. 00:33:53.059 [2024-07-13 15:45:23.739261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.059 [2024-07-13 15:45:23.739406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.059 [2024-07-13 15:45:23.739432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.059 [2024-07-13 15:45:23.739446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.059 [2024-07-13 15:45:23.739459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.059 [2024-07-13 15:45:23.739488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.059 qpair failed and we were unable to recover it. 00:33:53.059 [2024-07-13 15:45:23.749401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.059 [2024-07-13 15:45:23.749533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.059 [2024-07-13 15:45:23.749558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.059 [2024-07-13 15:45:23.749572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.059 [2024-07-13 15:45:23.749585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.059 [2024-07-13 15:45:23.749615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.059 qpair failed and we were unable to recover it. 
00:33:53.059 [2024-07-13 15:45:23.759313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.059 [2024-07-13 15:45:23.759446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.059 [2024-07-13 15:45:23.759471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.059 [2024-07-13 15:45:23.759485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.059 [2024-07-13 15:45:23.759498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.059 [2024-07-13 15:45:23.759529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.059 qpair failed and we were unable to recover it. 00:33:53.059 [2024-07-13 15:45:23.769352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.059 [2024-07-13 15:45:23.769488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.059 [2024-07-13 15:45:23.769514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.059 [2024-07-13 15:45:23.769528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.059 [2024-07-13 15:45:23.769540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.059 [2024-07-13 15:45:23.769570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.059 qpair failed and we were unable to recover it. 00:33:53.059 [2024-07-13 15:45:23.779374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.059 [2024-07-13 15:45:23.779510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.059 [2024-07-13 15:45:23.779536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.059 [2024-07-13 15:45:23.779551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.059 [2024-07-13 15:45:23.779564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.059 [2024-07-13 15:45:23.779593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.059 qpair failed and we were unable to recover it. 
00:33:53.059 [2024-07-13 15:45:23.789422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.059 [2024-07-13 15:45:23.789551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.059 [2024-07-13 15:45:23.789577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.059 [2024-07-13 15:45:23.789591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.059 [2024-07-13 15:45:23.789604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.059 [2024-07-13 15:45:23.789633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.059 qpair failed and we were unable to recover it. 00:33:53.059 [2024-07-13 15:45:23.799429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.059 [2024-07-13 15:45:23.799562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.059 [2024-07-13 15:45:23.799588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.059 [2024-07-13 15:45:23.799602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.059 [2024-07-13 15:45:23.799615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.059 [2024-07-13 15:45:23.799644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.059 qpair failed and we were unable to recover it. 00:33:53.059 [2024-07-13 15:45:23.809462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.059 [2024-07-13 15:45:23.809598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.059 [2024-07-13 15:45:23.809623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.059 [2024-07-13 15:45:23.809637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.059 [2024-07-13 15:45:23.809655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.059 [2024-07-13 15:45:23.809685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.059 qpair failed and we were unable to recover it. 
00:33:53.059 [2024-07-13 15:45:23.819487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.059 [2024-07-13 15:45:23.819621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.059 [2024-07-13 15:45:23.819646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.059 [2024-07-13 15:45:23.819661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.059 [2024-07-13 15:45:23.819675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.059 [2024-07-13 15:45:23.819706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.059 qpair failed and we were unable to recover it. 00:33:53.318 [2024-07-13 15:45:23.829510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.318 [2024-07-13 15:45:23.829677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.318 [2024-07-13 15:45:23.829702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.318 [2024-07-13 15:45:23.829717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.318 [2024-07-13 15:45:23.829730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.318 [2024-07-13 15:45:23.829761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.318 qpair failed and we were unable to recover it. 00:33:53.318 [2024-07-13 15:45:23.839565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.318 [2024-07-13 15:45:23.839707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.318 [2024-07-13 15:45:23.839733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.318 [2024-07-13 15:45:23.839748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.318 [2024-07-13 15:45:23.839761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.318 [2024-07-13 15:45:23.839790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.318 qpair failed and we were unable to recover it. 
00:33:53.318 [2024-07-13 15:45:23.849602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.318 [2024-07-13 15:45:23.849744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.318 [2024-07-13 15:45:23.849769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.318 [2024-07-13 15:45:23.849784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.318 [2024-07-13 15:45:23.849798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.318 [2024-07-13 15:45:23.849827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.318 qpair failed and we were unable to recover it. 00:33:53.319 [2024-07-13 15:45:23.859607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.319 [2024-07-13 15:45:23.859737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.319 [2024-07-13 15:45:23.859763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.319 [2024-07-13 15:45:23.859778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.319 [2024-07-13 15:45:23.859791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.319 [2024-07-13 15:45:23.859821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.319 qpair failed and we were unable to recover it. 00:33:53.319 [2024-07-13 15:45:23.869627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.319 [2024-07-13 15:45:23.869761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.319 [2024-07-13 15:45:23.869788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.319 [2024-07-13 15:45:23.869803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.319 [2024-07-13 15:45:23.869817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.319 [2024-07-13 15:45:23.869846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.319 qpair failed and we were unable to recover it. 
00:33:53.319 [2024-07-13 15:45:23.879777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.319 [2024-07-13 15:45:23.879916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.319 [2024-07-13 15:45:23.879943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.319 [2024-07-13 15:45:23.879958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.319 [2024-07-13 15:45:23.879972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.319 [2024-07-13 15:45:23.880003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.319 qpair failed and we were unable to recover it. 00:33:53.319 [2024-07-13 15:45:23.889711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.319 [2024-07-13 15:45:23.889861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.319 [2024-07-13 15:45:23.889896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.319 [2024-07-13 15:45:23.889911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.319 [2024-07-13 15:45:23.889924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.319 [2024-07-13 15:45:23.889954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.319 qpair failed and we were unable to recover it. 00:33:53.319 [2024-07-13 15:45:23.899814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.319 [2024-07-13 15:45:23.899947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.319 [2024-07-13 15:45:23.899974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.319 [2024-07-13 15:45:23.899994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.319 [2024-07-13 15:45:23.900008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.319 [2024-07-13 15:45:23.900037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.319 qpair failed and we were unable to recover it. 
00:33:53.319 [2024-07-13 15:45:23.909765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.319 [2024-07-13 15:45:23.909892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.319 [2024-07-13 15:45:23.909919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.319 [2024-07-13 15:45:23.909933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.319 [2024-07-13 15:45:23.909947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.319 [2024-07-13 15:45:23.909978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.319 qpair failed and we were unable to recover it. 00:33:53.319 [2024-07-13 15:45:23.919783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.319 [2024-07-13 15:45:23.919936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.319 [2024-07-13 15:45:23.919965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.319 [2024-07-13 15:45:23.919980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.319 [2024-07-13 15:45:23.919993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.319 [2024-07-13 15:45:23.920023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.319 qpair failed and we were unable to recover it. 00:33:53.319 [2024-07-13 15:45:23.929797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.319 [2024-07-13 15:45:23.929935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.319 [2024-07-13 15:45:23.929961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.319 [2024-07-13 15:45:23.929976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.319 [2024-07-13 15:45:23.929989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.319 [2024-07-13 15:45:23.930021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.319 qpair failed and we were unable to recover it. 
00:33:53.319 [2024-07-13 15:45:23.939823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.319 [2024-07-13 15:45:23.939968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.319 [2024-07-13 15:45:23.939994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.319 [2024-07-13 15:45:23.940009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.319 [2024-07-13 15:45:23.940022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.319 [2024-07-13 15:45:23.940051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.319 qpair failed and we were unable to recover it. 00:33:53.319 [2024-07-13 15:45:23.949856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.319 [2024-07-13 15:45:23.950035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.319 [2024-07-13 15:45:23.950061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.319 [2024-07-13 15:45:23.950076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.319 [2024-07-13 15:45:23.950089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.319 [2024-07-13 15:45:23.950121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.319 qpair failed and we were unable to recover it. 00:33:53.319 [2024-07-13 15:45:23.959906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.319 [2024-07-13 15:45:23.960048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.319 [2024-07-13 15:45:23.960074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.319 [2024-07-13 15:45:23.960088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.319 [2024-07-13 15:45:23.960102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.319 [2024-07-13 15:45:23.960131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.319 qpair failed and we were unable to recover it. 
00:33:53.319 [2024-07-13 15:45:23.969948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.319 [2024-07-13 15:45:23.970117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.319 [2024-07-13 15:45:23.970143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.319 [2024-07-13 15:45:23.970157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.319 [2024-07-13 15:45:23.970171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.319 [2024-07-13 15:45:23.970202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.319 qpair failed and we were unable to recover it. 00:33:53.319 [2024-07-13 15:45:23.979980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.319 [2024-07-13 15:45:23.980140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.319 [2024-07-13 15:45:23.980172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.319 [2024-07-13 15:45:23.980187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.319 [2024-07-13 15:45:23.980200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.319 [2024-07-13 15:45:23.980229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.319 qpair failed and we were unable to recover it. 00:33:53.319 [2024-07-13 15:45:23.989974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.319 [2024-07-13 15:45:23.990103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.319 [2024-07-13 15:45:23.990133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.319 [2024-07-13 15:45:23.990148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.319 [2024-07-13 15:45:23.990161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.319 [2024-07-13 15:45:23.990191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.319 qpair failed and we were unable to recover it. 
00:33:53.319 [2024-07-13 15:45:24.000006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.320 [2024-07-13 15:45:24.000150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.320 [2024-07-13 15:45:24.000176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.320 [2024-07-13 15:45:24.000190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.320 [2024-07-13 15:45:24.000203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.320 [2024-07-13 15:45:24.000234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.320 qpair failed and we were unable to recover it. 00:33:53.320 [2024-07-13 15:45:24.010052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.320 [2024-07-13 15:45:24.010190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.320 [2024-07-13 15:45:24.010216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.320 [2024-07-13 15:45:24.010231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.320 [2024-07-13 15:45:24.010244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.320 [2024-07-13 15:45:24.010273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.320 qpair failed and we were unable to recover it. 00:33:53.320 [2024-07-13 15:45:24.020078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.320 [2024-07-13 15:45:24.020209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.320 [2024-07-13 15:45:24.020234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.320 [2024-07-13 15:45:24.020248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.320 [2024-07-13 15:45:24.020262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.320 [2024-07-13 15:45:24.020290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.320 qpair failed and we were unable to recover it. 
00:33:53.320 [2024-07-13 15:45:24.030124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.320 [2024-07-13 15:45:24.030254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.320 [2024-07-13 15:45:24.030278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.320 [2024-07-13 15:45:24.030293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.320 [2024-07-13 15:45:24.030306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.320 [2024-07-13 15:45:24.030342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.320 qpair failed and we were unable to recover it. 00:33:53.320 [2024-07-13 15:45:24.040108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.320 [2024-07-13 15:45:24.040241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.320 [2024-07-13 15:45:24.040266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.320 [2024-07-13 15:45:24.040280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.320 [2024-07-13 15:45:24.040293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.320 [2024-07-13 15:45:24.040322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.320 qpair failed and we were unable to recover it. 00:33:53.320 [2024-07-13 15:45:24.050127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.320 [2024-07-13 15:45:24.050267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.320 [2024-07-13 15:45:24.050292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.320 [2024-07-13 15:45:24.050306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.320 [2024-07-13 15:45:24.050320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.320 [2024-07-13 15:45:24.050349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.320 qpair failed and we were unable to recover it. 
00:33:53.320 [2024-07-13 15:45:24.060173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.320 [2024-07-13 15:45:24.060301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.320 [2024-07-13 15:45:24.060327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.320 [2024-07-13 15:45:24.060341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.320 [2024-07-13 15:45:24.060354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.320 [2024-07-13 15:45:24.060384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.320 qpair failed and we were unable to recover it. 00:33:53.320 [2024-07-13 15:45:24.070189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.320 [2024-07-13 15:45:24.070320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.320 [2024-07-13 15:45:24.070345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.320 [2024-07-13 15:45:24.070359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.320 [2024-07-13 15:45:24.070373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.320 [2024-07-13 15:45:24.070402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.320 qpair failed and we were unable to recover it. 00:33:53.320 [2024-07-13 15:45:24.080280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.320 [2024-07-13 15:45:24.080465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.320 [2024-07-13 15:45:24.080495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.320 [2024-07-13 15:45:24.080511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.320 [2024-07-13 15:45:24.080524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.320 [2024-07-13 15:45:24.080553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.320 qpair failed and we were unable to recover it. 
00:33:53.579 [2024-07-13 15:45:24.090242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.579 [2024-07-13 15:45:24.090374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.579 [2024-07-13 15:45:24.090400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.579 [2024-07-13 15:45:24.090414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.579 [2024-07-13 15:45:24.090427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.579 [2024-07-13 15:45:24.090458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.579 qpair failed and we were unable to recover it. 00:33:53.579 [2024-07-13 15:45:24.100332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.579 [2024-07-13 15:45:24.100465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.579 [2024-07-13 15:45:24.100491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.579 [2024-07-13 15:45:24.100505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.579 [2024-07-13 15:45:24.100519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.579 [2024-07-13 15:45:24.100548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.579 qpair failed and we were unable to recover it. 00:33:53.579 [2024-07-13 15:45:24.110355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.579 [2024-07-13 15:45:24.110526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.579 [2024-07-13 15:45:24.110552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.579 [2024-07-13 15:45:24.110566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.579 [2024-07-13 15:45:24.110579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.579 [2024-07-13 15:45:24.110608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.579 qpair failed and we were unable to recover it. 
00:33:53.579 [2024-07-13 15:45:24.120374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.579 [2024-07-13 15:45:24.120506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.579 [2024-07-13 15:45:24.120531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.579 [2024-07-13 15:45:24.120545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.579 [2024-07-13 15:45:24.120557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.579 [2024-07-13 15:45:24.120593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.579 qpair failed and we were unable to recover it. 00:33:53.579 [2024-07-13 15:45:24.130388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.579 [2024-07-13 15:45:24.130525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.579 [2024-07-13 15:45:24.130551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.579 [2024-07-13 15:45:24.130566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.579 [2024-07-13 15:45:24.130578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.579 [2024-07-13 15:45:24.130609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.579 qpair failed and we were unable to recover it. 00:33:53.579 [2024-07-13 15:45:24.140438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.579 [2024-07-13 15:45:24.140568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.579 [2024-07-13 15:45:24.140594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.579 [2024-07-13 15:45:24.140608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.579 [2024-07-13 15:45:24.140621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.579 [2024-07-13 15:45:24.140650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.579 qpair failed and we were unable to recover it. 
00:33:53.579 [2024-07-13 15:45:24.150443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.579 [2024-07-13 15:45:24.150580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.579 [2024-07-13 15:45:24.150608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.580 [2024-07-13 15:45:24.150622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.580 [2024-07-13 15:45:24.150636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.580 [2024-07-13 15:45:24.150665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.580 qpair failed and we were unable to recover it. 00:33:53.580 [2024-07-13 15:45:24.160489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.580 [2024-07-13 15:45:24.160676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.580 [2024-07-13 15:45:24.160701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.580 [2024-07-13 15:45:24.160716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.580 [2024-07-13 15:45:24.160729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.580 [2024-07-13 15:45:24.160758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.580 qpair failed and we were unable to recover it. 00:33:53.580 [2024-07-13 15:45:24.170519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.580 [2024-07-13 15:45:24.170702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.580 [2024-07-13 15:45:24.170727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.580 [2024-07-13 15:45:24.170742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.580 [2024-07-13 15:45:24.170756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.580 [2024-07-13 15:45:24.170785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.580 qpair failed and we were unable to recover it. 
00:33:53.580 [2024-07-13 15:45:24.180505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.580 [2024-07-13 15:45:24.180637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.580 [2024-07-13 15:45:24.180662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.580 [2024-07-13 15:45:24.180676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.580 [2024-07-13 15:45:24.180690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.580 [2024-07-13 15:45:24.180719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.580 qpair failed and we were unable to recover it. 00:33:53.580 [2024-07-13 15:45:24.190561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.580 [2024-07-13 15:45:24.190695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.580 [2024-07-13 15:45:24.190721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.580 [2024-07-13 15:45:24.190735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.580 [2024-07-13 15:45:24.190748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.580 [2024-07-13 15:45:24.190792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.580 qpair failed and we were unable to recover it. 00:33:53.580 [2024-07-13 15:45:24.200572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.580 [2024-07-13 15:45:24.200708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.580 [2024-07-13 15:45:24.200734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.580 [2024-07-13 15:45:24.200749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.580 [2024-07-13 15:45:24.200762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.580 [2024-07-13 15:45:24.200791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.580 qpair failed and we were unable to recover it. 
00:33:53.580 [2024-07-13 15:45:24.210637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.580 [2024-07-13 15:45:24.210771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.580 [2024-07-13 15:45:24.210797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.580 [2024-07-13 15:45:24.210812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.580 [2024-07-13 15:45:24.210830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.580 [2024-07-13 15:45:24.210860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.580 qpair failed and we were unable to recover it. 00:33:53.580 [2024-07-13 15:45:24.220647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.580 [2024-07-13 15:45:24.220785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.580 [2024-07-13 15:45:24.220811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.580 [2024-07-13 15:45:24.220825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.580 [2024-07-13 15:45:24.220838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.580 [2024-07-13 15:45:24.220876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.580 qpair failed and we were unable to recover it. 00:33:53.580 [2024-07-13 15:45:24.230658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.580 [2024-07-13 15:45:24.230790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.580 [2024-07-13 15:45:24.230815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.580 [2024-07-13 15:45:24.230830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.580 [2024-07-13 15:45:24.230842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.580 [2024-07-13 15:45:24.230879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.580 qpair failed and we were unable to recover it. 
00:33:53.580 [2024-07-13 15:45:24.240679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.580 [2024-07-13 15:45:24.240817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.580 [2024-07-13 15:45:24.240842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.580 [2024-07-13 15:45:24.240856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.580 [2024-07-13 15:45:24.240877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.580 [2024-07-13 15:45:24.240907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.580 qpair failed and we were unable to recover it. 00:33:53.580 [2024-07-13 15:45:24.250716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.580 [2024-07-13 15:45:24.250848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.580 [2024-07-13 15:45:24.250881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.580 [2024-07-13 15:45:24.250896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.580 [2024-07-13 15:45:24.250910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.580 [2024-07-13 15:45:24.250940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.580 qpair failed and we were unable to recover it. 00:33:53.580 [2024-07-13 15:45:24.260749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.580 [2024-07-13 15:45:24.260907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.580 [2024-07-13 15:45:24.260932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.580 [2024-07-13 15:45:24.260947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.580 [2024-07-13 15:45:24.260960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.580 [2024-07-13 15:45:24.260989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.580 qpair failed and we were unable to recover it. 
00:33:53.580 [2024-07-13 15:45:24.270781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.580 [2024-07-13 15:45:24.270919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.580 [2024-07-13 15:45:24.270945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.580 [2024-07-13 15:45:24.270959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.580 [2024-07-13 15:45:24.270973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.580 [2024-07-13 15:45:24.271002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.580 qpair failed and we were unable to recover it. 00:33:53.580 [2024-07-13 15:45:24.280890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.580 [2024-07-13 15:45:24.281027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.580 [2024-07-13 15:45:24.281051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.580 [2024-07-13 15:45:24.281065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.580 [2024-07-13 15:45:24.281077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.580 [2024-07-13 15:45:24.281106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.580 qpair failed and we were unable to recover it. 00:33:53.580 [2024-07-13 15:45:24.290856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.580 [2024-07-13 15:45:24.290999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.580 [2024-07-13 15:45:24.291024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.580 [2024-07-13 15:45:24.291038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.581 [2024-07-13 15:45:24.291052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.581 [2024-07-13 15:45:24.291082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.581 qpair failed and we were unable to recover it. 
00:33:53.581 [2024-07-13 15:45:24.300886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.581 [2024-07-13 15:45:24.301013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.581 [2024-07-13 15:45:24.301038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.581 [2024-07-13 15:45:24.301059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.581 [2024-07-13 15:45:24.301073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.581 [2024-07-13 15:45:24.301102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.581 qpair failed and we were unable to recover it. 00:33:53.581 [2024-07-13 15:45:24.310888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.581 [2024-07-13 15:45:24.311081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.581 [2024-07-13 15:45:24.311105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.581 [2024-07-13 15:45:24.311120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.581 [2024-07-13 15:45:24.311133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.581 [2024-07-13 15:45:24.311164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.581 qpair failed and we were unable to recover it. 00:33:53.581 [2024-07-13 15:45:24.320925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.581 [2024-07-13 15:45:24.321066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.581 [2024-07-13 15:45:24.321092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.581 [2024-07-13 15:45:24.321106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.581 [2024-07-13 15:45:24.321119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.581 [2024-07-13 15:45:24.321148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.581 qpair failed and we were unable to recover it. 
00:33:53.581 [2024-07-13 15:45:24.330936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.581 [2024-07-13 15:45:24.331065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.581 [2024-07-13 15:45:24.331091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.581 [2024-07-13 15:45:24.331105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.581 [2024-07-13 15:45:24.331117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.581 [2024-07-13 15:45:24.331147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.581 qpair failed and we were unable to recover it. 00:33:53.581 [2024-07-13 15:45:24.340980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.581 [2024-07-13 15:45:24.341120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.581 [2024-07-13 15:45:24.341145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.581 [2024-07-13 15:45:24.341160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.581 [2024-07-13 15:45:24.341173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.581 [2024-07-13 15:45:24.341202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.581 qpair failed and we were unable to recover it. 00:33:53.841 [2024-07-13 15:45:24.351021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.841 [2024-07-13 15:45:24.351157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.841 [2024-07-13 15:45:24.351182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.841 [2024-07-13 15:45:24.351196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.841 [2024-07-13 15:45:24.351210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.841 [2024-07-13 15:45:24.351240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.841 qpair failed and we were unable to recover it. 
00:33:53.841 [2024-07-13 15:45:24.361039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.841 [2024-07-13 15:45:24.361186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.841 [2024-07-13 15:45:24.361211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.841 [2024-07-13 15:45:24.361225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.841 [2024-07-13 15:45:24.361238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.841 [2024-07-13 15:45:24.361266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.841 qpair failed and we were unable to recover it. 00:33:53.841 [2024-07-13 15:45:24.371169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.841 [2024-07-13 15:45:24.371303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.841 [2024-07-13 15:45:24.371329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.841 [2024-07-13 15:45:24.371343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.841 [2024-07-13 15:45:24.371356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.841 [2024-07-13 15:45:24.371384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.841 qpair failed and we were unable to recover it. 00:33:53.841 [2024-07-13 15:45:24.381101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.841 [2024-07-13 15:45:24.381231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.841 [2024-07-13 15:45:24.381257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.841 [2024-07-13 15:45:24.381271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.841 [2024-07-13 15:45:24.381283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.841 [2024-07-13 15:45:24.381314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.841 qpair failed and we were unable to recover it. 
00:33:53.841 [2024-07-13 15:45:24.391107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.841 [2024-07-13 15:45:24.391239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.841 [2024-07-13 15:45:24.391269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.841 [2024-07-13 15:45:24.391285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.841 [2024-07-13 15:45:24.391298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.841 [2024-07-13 15:45:24.391327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.841 qpair failed and we were unable to recover it. 00:33:53.841 [2024-07-13 15:45:24.401186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.841 [2024-07-13 15:45:24.401325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.841 [2024-07-13 15:45:24.401350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.841 [2024-07-13 15:45:24.401364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.841 [2024-07-13 15:45:24.401377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.841 [2024-07-13 15:45:24.401407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.841 qpair failed and we were unable to recover it. 00:33:53.841 [2024-07-13 15:45:24.411185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.841 [2024-07-13 15:45:24.411338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.841 [2024-07-13 15:45:24.411363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.841 [2024-07-13 15:45:24.411377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.841 [2024-07-13 15:45:24.411390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.841 [2024-07-13 15:45:24.411420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.841 qpair failed and we were unable to recover it. 
00:33:53.841 [2024-07-13 15:45:24.421213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.841 [2024-07-13 15:45:24.421346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.841 [2024-07-13 15:45:24.421370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.841 [2024-07-13 15:45:24.421385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.841 [2024-07-13 15:45:24.421398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.841 [2024-07-13 15:45:24.421426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.841 qpair failed and we were unable to recover it. 00:33:53.841 [2024-07-13 15:45:24.431250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.841 [2024-07-13 15:45:24.431383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.841 [2024-07-13 15:45:24.431408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.841 [2024-07-13 15:45:24.431422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.841 [2024-07-13 15:45:24.431435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.841 [2024-07-13 15:45:24.431464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.841 qpair failed and we were unable to recover it. 00:33:53.841 [2024-07-13 15:45:24.441306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.841 [2024-07-13 15:45:24.441450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.841 [2024-07-13 15:45:24.441476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.841 [2024-07-13 15:45:24.441490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.841 [2024-07-13 15:45:24.441503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.841 [2024-07-13 15:45:24.441532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.841 qpair failed and we were unable to recover it. 
00:33:53.841 [2024-07-13 15:45:24.451359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.841 [2024-07-13 15:45:24.451517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.841 [2024-07-13 15:45:24.451546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.841 [2024-07-13 15:45:24.451561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.842 [2024-07-13 15:45:24.451575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.842 [2024-07-13 15:45:24.451604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.842 qpair failed and we were unable to recover it. 00:33:53.842 [2024-07-13 15:45:24.461410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.842 [2024-07-13 15:45:24.461544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.842 [2024-07-13 15:45:24.461569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.842 [2024-07-13 15:45:24.461584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.842 [2024-07-13 15:45:24.461597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.842 [2024-07-13 15:45:24.461626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.842 qpair failed and we were unable to recover it. 00:33:53.842 [2024-07-13 15:45:24.471365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.842 [2024-07-13 15:45:24.471543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.842 [2024-07-13 15:45:24.471568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.842 [2024-07-13 15:45:24.471583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.842 [2024-07-13 15:45:24.471596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.842 [2024-07-13 15:45:24.471626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.842 qpair failed and we were unable to recover it. 
00:33:53.842 [2024-07-13 15:45:24.481390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.842 [2024-07-13 15:45:24.481544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.842 [2024-07-13 15:45:24.481578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.842 [2024-07-13 15:45:24.481593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.842 [2024-07-13 15:45:24.481606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.842 [2024-07-13 15:45:24.481635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.842 qpair failed and we were unable to recover it. 00:33:53.842 [2024-07-13 15:45:24.491439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.842 [2024-07-13 15:45:24.491573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.842 [2024-07-13 15:45:24.491598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.842 [2024-07-13 15:45:24.491612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.842 [2024-07-13 15:45:24.491625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.842 [2024-07-13 15:45:24.491654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.842 qpair failed and we were unable to recover it. 00:33:53.842 [2024-07-13 15:45:24.501422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.842 [2024-07-13 15:45:24.501557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.842 [2024-07-13 15:45:24.501582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.842 [2024-07-13 15:45:24.501596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.842 [2024-07-13 15:45:24.501609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.842 [2024-07-13 15:45:24.501639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.842 qpair failed and we were unable to recover it. 
00:33:53.842 [2024-07-13 15:45:24.511449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.842 [2024-07-13 15:45:24.511578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.842 [2024-07-13 15:45:24.511603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.842 [2024-07-13 15:45:24.511617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.842 [2024-07-13 15:45:24.511630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.842 [2024-07-13 15:45:24.511659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.842 qpair failed and we were unable to recover it. 00:33:53.842 [2024-07-13 15:45:24.521497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.842 [2024-07-13 15:45:24.521678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.842 [2024-07-13 15:45:24.521703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.842 [2024-07-13 15:45:24.521718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.842 [2024-07-13 15:45:24.521731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.842 [2024-07-13 15:45:24.521766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.842 qpair failed and we were unable to recover it. 00:33:53.842 [2024-07-13 15:45:24.531520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.842 [2024-07-13 15:45:24.531656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.842 [2024-07-13 15:45:24.531681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.842 [2024-07-13 15:45:24.531696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.842 [2024-07-13 15:45:24.531709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.842 [2024-07-13 15:45:24.531738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.842 qpair failed and we were unable to recover it. 
00:33:53.842 [2024-07-13 15:45:24.541530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.842 [2024-07-13 15:45:24.541660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.842 [2024-07-13 15:45:24.541686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.842 [2024-07-13 15:45:24.541701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.842 [2024-07-13 15:45:24.541713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.842 [2024-07-13 15:45:24.541742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.842 qpair failed and we were unable to recover it. 00:33:53.842 [2024-07-13 15:45:24.551558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.842 [2024-07-13 15:45:24.551727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.842 [2024-07-13 15:45:24.551753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.842 [2024-07-13 15:45:24.551767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.842 [2024-07-13 15:45:24.551780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.842 [2024-07-13 15:45:24.551812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.842 qpair failed and we were unable to recover it. 00:33:53.842 [2024-07-13 15:45:24.561630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.842 [2024-07-13 15:45:24.561783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.842 [2024-07-13 15:45:24.561808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.842 [2024-07-13 15:45:24.561822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.842 [2024-07-13 15:45:24.561835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.842 [2024-07-13 15:45:24.561873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.842 qpair failed and we were unable to recover it. 
00:33:53.842 [2024-07-13 15:45:24.571617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.842 [2024-07-13 15:45:24.571778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.842 [2024-07-13 15:45:24.571808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.842 [2024-07-13 15:45:24.571824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.842 [2024-07-13 15:45:24.571837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.842 [2024-07-13 15:45:24.571873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.842 qpair failed and we were unable to recover it. 00:33:53.842 [2024-07-13 15:45:24.581675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.842 [2024-07-13 15:45:24.581802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.842 [2024-07-13 15:45:24.581827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.842 [2024-07-13 15:45:24.581841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.842 [2024-07-13 15:45:24.581855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.842 [2024-07-13 15:45:24.581894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.842 qpair failed and we were unable to recover it. 00:33:53.842 [2024-07-13 15:45:24.591678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.842 [2024-07-13 15:45:24.591815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.842 [2024-07-13 15:45:24.591841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.842 [2024-07-13 15:45:24.591855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.842 [2024-07-13 15:45:24.591878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.843 [2024-07-13 15:45:24.591911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.843 qpair failed and we were unable to recover it. 
00:33:53.843 [2024-07-13 15:45:24.601721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:53.843 [2024-07-13 15:45:24.601884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:53.843 [2024-07-13 15:45:24.601919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:53.843 [2024-07-13 15:45:24.601933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:53.843 [2024-07-13 15:45:24.601946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:53.843 [2024-07-13 15:45:24.601975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:53.843 qpair failed and we were unable to recover it. 00:33:54.104 [2024-07-13 15:45:24.611723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.104 [2024-07-13 15:45:24.611871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.104 [2024-07-13 15:45:24.611897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.104 [2024-07-13 15:45:24.611912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.104 [2024-07-13 15:45:24.611935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.104 [2024-07-13 15:45:24.611966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.104 qpair failed and we were unable to recover it. 00:33:54.104 [2024-07-13 15:45:24.621778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.104 [2024-07-13 15:45:24.621920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.104 [2024-07-13 15:45:24.621945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.104 [2024-07-13 15:45:24.621959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.104 [2024-07-13 15:45:24.621972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.104 [2024-07-13 15:45:24.622015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.104 qpair failed and we were unable to recover it. 
00:33:54.104 [2024-07-13 15:45:24.631783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.104 [2024-07-13 15:45:24.631917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.104 [2024-07-13 15:45:24.631943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.104 [2024-07-13 15:45:24.631957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.104 [2024-07-13 15:45:24.631970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.104 [2024-07-13 15:45:24.632002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.104 qpair failed and we were unable to recover it. 00:33:54.104 [2024-07-13 15:45:24.641839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.104 [2024-07-13 15:45:24.642036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.104 [2024-07-13 15:45:24.642065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.104 [2024-07-13 15:45:24.642080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.104 [2024-07-13 15:45:24.642093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.104 [2024-07-13 15:45:24.642122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.104 qpair failed and we were unable to recover it. 00:33:54.104 [2024-07-13 15:45:24.651869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.104 [2024-07-13 15:45:24.652040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.104 [2024-07-13 15:45:24.652067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.104 [2024-07-13 15:45:24.652081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.104 [2024-07-13 15:45:24.652095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.104 [2024-07-13 15:45:24.652124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.104 qpair failed and we were unable to recover it. 
00:33:54.104 [2024-07-13 15:45:24.661936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.104 [2024-07-13 15:45:24.662079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.104 [2024-07-13 15:45:24.662105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.104 [2024-07-13 15:45:24.662124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.104 [2024-07-13 15:45:24.662138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.104 [2024-07-13 15:45:24.662168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.104 qpair failed and we were unable to recover it. 00:33:54.104 [2024-07-13 15:45:24.671900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.104 [2024-07-13 15:45:24.672030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.104 [2024-07-13 15:45:24.672055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.104 [2024-07-13 15:45:24.672070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.104 [2024-07-13 15:45:24.672083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.104 [2024-07-13 15:45:24.672112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.104 qpair failed and we were unable to recover it. 00:33:54.104 [2024-07-13 15:45:24.681977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.104 [2024-07-13 15:45:24.682116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.104 [2024-07-13 15:45:24.682142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.104 [2024-07-13 15:45:24.682157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.104 [2024-07-13 15:45:24.682170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.104 [2024-07-13 15:45:24.682200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.104 qpair failed and we were unable to recover it. 
00:33:54.104 [2024-07-13 15:45:24.691972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.104 [2024-07-13 15:45:24.692110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.105 [2024-07-13 15:45:24.692135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.105 [2024-07-13 15:45:24.692149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.105 [2024-07-13 15:45:24.692162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.105 [2024-07-13 15:45:24.692192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.105 qpair failed and we were unable to recover it. 00:33:54.105 [2024-07-13 15:45:24.702005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.105 [2024-07-13 15:45:24.702162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.105 [2024-07-13 15:45:24.702189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.105 [2024-07-13 15:45:24.702212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.105 [2024-07-13 15:45:24.702227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.105 [2024-07-13 15:45:24.702259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.105 qpair failed and we were unable to recover it. 00:33:54.105 [2024-07-13 15:45:24.712024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.105 [2024-07-13 15:45:24.712155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.105 [2024-07-13 15:45:24.712181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.105 [2024-07-13 15:45:24.712195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.105 [2024-07-13 15:45:24.712208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.105 [2024-07-13 15:45:24.712237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.105 qpair failed and we were unable to recover it. 
00:33:54.105 [2024-07-13 15:45:24.722067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.105 [2024-07-13 15:45:24.722202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.105 [2024-07-13 15:45:24.722227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.105 [2024-07-13 15:45:24.722241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.105 [2024-07-13 15:45:24.722254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.105 [2024-07-13 15:45:24.722283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.105 qpair failed and we were unable to recover it. 00:33:54.105 [2024-07-13 15:45:24.732076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.105 [2024-07-13 15:45:24.732215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.105 [2024-07-13 15:45:24.732240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.105 [2024-07-13 15:45:24.732255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.105 [2024-07-13 15:45:24.732268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.105 [2024-07-13 15:45:24.732298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.105 qpair failed and we were unable to recover it. 00:33:54.105 [2024-07-13 15:45:24.742164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.105 [2024-07-13 15:45:24.742296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.105 [2024-07-13 15:45:24.742325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.105 [2024-07-13 15:45:24.742340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.105 [2024-07-13 15:45:24.742353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.105 [2024-07-13 15:45:24.742382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.105 qpair failed and we were unable to recover it. 
00:33:54.105 [2024-07-13 15:45:24.752122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.105 [2024-07-13 15:45:24.752255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.105 [2024-07-13 15:45:24.752281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.105 [2024-07-13 15:45:24.752295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.105 [2024-07-13 15:45:24.752308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.105 [2024-07-13 15:45:24.752339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.105 qpair failed and we were unable to recover it. 00:33:54.105 [2024-07-13 15:45:24.762158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.105 [2024-07-13 15:45:24.762294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.105 [2024-07-13 15:45:24.762320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.105 [2024-07-13 15:45:24.762335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.105 [2024-07-13 15:45:24.762348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.105 [2024-07-13 15:45:24.762379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.105 qpair failed and we were unable to recover it. 00:33:54.105 [2024-07-13 15:45:24.772192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.105 [2024-07-13 15:45:24.772330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.105 [2024-07-13 15:45:24.772356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.105 [2024-07-13 15:45:24.772371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.105 [2024-07-13 15:45:24.772384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.105 [2024-07-13 15:45:24.772414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.105 qpair failed and we were unable to recover it. 
00:33:54.105 [2024-07-13 15:45:24.782217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.105 [2024-07-13 15:45:24.782346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.105 [2024-07-13 15:45:24.782371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.105 [2024-07-13 15:45:24.782385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.105 [2024-07-13 15:45:24.782400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.105 [2024-07-13 15:45:24.782431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.105 qpair failed and we were unable to recover it. 00:33:54.105 [2024-07-13 15:45:24.792255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.105 [2024-07-13 15:45:24.792395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.105 [2024-07-13 15:45:24.792422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.105 [2024-07-13 15:45:24.792445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.105 [2024-07-13 15:45:24.792460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.105 [2024-07-13 15:45:24.792490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.105 qpair failed and we were unable to recover it. 00:33:54.105 [2024-07-13 15:45:24.802276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.105 [2024-07-13 15:45:24.802411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.105 [2024-07-13 15:45:24.802437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.105 [2024-07-13 15:45:24.802451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.105 [2024-07-13 15:45:24.802466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.105 [2024-07-13 15:45:24.802496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.105 qpair failed and we were unable to recover it. 
00:33:54.105 [2024-07-13 15:45:24.812309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.105 [2024-07-13 15:45:24.812441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.105 [2024-07-13 15:45:24.812467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.105 [2024-07-13 15:45:24.812481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.105 [2024-07-13 15:45:24.812494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.105 [2024-07-13 15:45:24.812525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.105 qpair failed and we were unable to recover it. 00:33:54.105 [2024-07-13 15:45:24.822380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.105 [2024-07-13 15:45:24.822510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.105 [2024-07-13 15:45:24.822536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.105 [2024-07-13 15:45:24.822550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.105 [2024-07-13 15:45:24.822563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.105 [2024-07-13 15:45:24.822592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.105 qpair failed and we were unable to recover it. 00:33:54.105 [2024-07-13 15:45:24.832364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.105 [2024-07-13 15:45:24.832548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.106 [2024-07-13 15:45:24.832574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.106 [2024-07-13 15:45:24.832588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.106 [2024-07-13 15:45:24.832601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.106 [2024-07-13 15:45:24.832632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.106 qpair failed and we were unable to recover it. 
00:33:54.106 [2024-07-13 15:45:24.842422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.106 [2024-07-13 15:45:24.842558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.106 [2024-07-13 15:45:24.842583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.106 [2024-07-13 15:45:24.842601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.106 [2024-07-13 15:45:24.842614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.106 [2024-07-13 15:45:24.842643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.106 qpair failed and we were unable to recover it. 00:33:54.106 [2024-07-13 15:45:24.852410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.106 [2024-07-13 15:45:24.852541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.106 [2024-07-13 15:45:24.852567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.106 [2024-07-13 15:45:24.852582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.106 [2024-07-13 15:45:24.852595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.106 [2024-07-13 15:45:24.852624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.106 qpair failed and we were unable to recover it. 00:33:54.106 [2024-07-13 15:45:24.862483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.106 [2024-07-13 15:45:24.862620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.106 [2024-07-13 15:45:24.862646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.106 [2024-07-13 15:45:24.862660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.106 [2024-07-13 15:45:24.862673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.106 [2024-07-13 15:45:24.862704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.106 qpair failed and we were unable to recover it. 
00:33:54.367 [2024-07-13 15:45:24.872459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.367 [2024-07-13 15:45:24.872588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.367 [2024-07-13 15:45:24.872614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.367 [2024-07-13 15:45:24.872629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.367 [2024-07-13 15:45:24.872642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.367 [2024-07-13 15:45:24.872671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.367 qpair failed and we were unable to recover it. 00:33:54.367 [2024-07-13 15:45:24.882541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.367 [2024-07-13 15:45:24.882674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.367 [2024-07-13 15:45:24.882704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.368 [2024-07-13 15:45:24.882720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.368 [2024-07-13 15:45:24.882733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.368 [2024-07-13 15:45:24.882778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-07-13 15:45:24.892534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.368 [2024-07-13 15:45:24.892664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.368 [2024-07-13 15:45:24.892690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.368 [2024-07-13 15:45:24.892704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.368 [2024-07-13 15:45:24.892717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.368 [2024-07-13 15:45:24.892747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.368 qpair failed and we were unable to recover it. 
00:33:54.368 [2024-07-13 15:45:24.902556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.368 [2024-07-13 15:45:24.902685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.368 [2024-07-13 15:45:24.902710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.368 [2024-07-13 15:45:24.902725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.368 [2024-07-13 15:45:24.902738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.368 [2024-07-13 15:45:24.902767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-07-13 15:45:24.912612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.368 [2024-07-13 15:45:24.912771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.368 [2024-07-13 15:45:24.912797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.368 [2024-07-13 15:45:24.912811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.368 [2024-07-13 15:45:24.912824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.368 [2024-07-13 15:45:24.912855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-07-13 15:45:24.922730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.368 [2024-07-13 15:45:24.922876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.368 [2024-07-13 15:45:24.922901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.368 [2024-07-13 15:45:24.922915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.368 [2024-07-13 15:45:24.922929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.368 [2024-07-13 15:45:24.922965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.368 qpair failed and we were unable to recover it. 
00:33:54.368 [2024-07-13 15:45:24.932636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.368 [2024-07-13 15:45:24.932770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.368 [2024-07-13 15:45:24.932795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.368 [2024-07-13 15:45:24.932809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.368 [2024-07-13 15:45:24.932822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.368 [2024-07-13 15:45:24.932851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-07-13 15:45:24.942656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.368 [2024-07-13 15:45:24.942789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.368 [2024-07-13 15:45:24.942815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.368 [2024-07-13 15:45:24.942829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.368 [2024-07-13 15:45:24.942842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.368 [2024-07-13 15:45:24.942878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-07-13 15:45:24.952683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.368 [2024-07-13 15:45:24.952812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.368 [2024-07-13 15:45:24.952838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.368 [2024-07-13 15:45:24.952857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.368 [2024-07-13 15:45:24.952880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.368 [2024-07-13 15:45:24.952911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.368 qpair failed and we were unable to recover it. 
00:33:54.368 [2024-07-13 15:45:24.962740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.368 [2024-07-13 15:45:24.962890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.368 [2024-07-13 15:45:24.962916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.368 [2024-07-13 15:45:24.962931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.368 [2024-07-13 15:45:24.962944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.368 [2024-07-13 15:45:24.962974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-07-13 15:45:24.972769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.368 [2024-07-13 15:45:24.972913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.368 [2024-07-13 15:45:24.972944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.368 [2024-07-13 15:45:24.972959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.368 [2024-07-13 15:45:24.972973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.368 [2024-07-13 15:45:24.973002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-07-13 15:45:24.982779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.368 [2024-07-13 15:45:24.982921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.368 [2024-07-13 15:45:24.982947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.368 [2024-07-13 15:45:24.982962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.368 [2024-07-13 15:45:24.982975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.368 [2024-07-13 15:45:24.983005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.368 qpair failed and we were unable to recover it. 
00:33:54.368 [2024-07-13 15:45:24.992819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.368 [2024-07-13 15:45:24.992963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.368 [2024-07-13 15:45:24.992990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.368 [2024-07-13 15:45:24.993005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.368 [2024-07-13 15:45:24.993018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.368 [2024-07-13 15:45:24.993063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.368 qpair failed and we were unable to recover it. 00:33:54.368 [2024-07-13 15:45:25.002856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.368 [2024-07-13 15:45:25.003004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.369 [2024-07-13 15:45:25.003030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.369 [2024-07-13 15:45:25.003045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.369 [2024-07-13 15:45:25.003059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.369 [2024-07-13 15:45:25.003103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-07-13 15:45:25.012862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.369 [2024-07-13 15:45:25.013005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.369 [2024-07-13 15:45:25.013031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.369 [2024-07-13 15:45:25.013045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.369 [2024-07-13 15:45:25.013064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.369 [2024-07-13 15:45:25.013094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.369 qpair failed and we were unable to recover it. 
00:33:54.369 [2024-07-13 15:45:25.022956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.369 [2024-07-13 15:45:25.023108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.369 [2024-07-13 15:45:25.023135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.369 [2024-07-13 15:45:25.023150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.369 [2024-07-13 15:45:25.023163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.369 [2024-07-13 15:45:25.023206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-07-13 15:45:25.032905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.369 [2024-07-13 15:45:25.033054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.369 [2024-07-13 15:45:25.033080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.369 [2024-07-13 15:45:25.033094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.369 [2024-07-13 15:45:25.033108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.369 [2024-07-13 15:45:25.033137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-07-13 15:45:25.042962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.369 [2024-07-13 15:45:25.043101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.369 [2024-07-13 15:45:25.043126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.369 [2024-07-13 15:45:25.043141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.369 [2024-07-13 15:45:25.043154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.369 [2024-07-13 15:45:25.043183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.369 qpair failed and we were unable to recover it. 
00:33:54.369 [2024-07-13 15:45:25.053005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.369 [2024-07-13 15:45:25.053148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.369 [2024-07-13 15:45:25.053174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.369 [2024-07-13 15:45:25.053188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.369 [2024-07-13 15:45:25.053202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.369 [2024-07-13 15:45:25.053231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-07-13 15:45:25.062997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.369 [2024-07-13 15:45:25.063161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.369 [2024-07-13 15:45:25.063186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.369 [2024-07-13 15:45:25.063203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.369 [2024-07-13 15:45:25.063217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.369 [2024-07-13 15:45:25.063246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-07-13 15:45:25.073043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.369 [2024-07-13 15:45:25.073212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.369 [2024-07-13 15:45:25.073238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.369 [2024-07-13 15:45:25.073253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.369 [2024-07-13 15:45:25.073266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.369 [2024-07-13 15:45:25.073296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.369 qpair failed and we were unable to recover it. 
00:33:54.369 [2024-07-13 15:45:25.083198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.369 [2024-07-13 15:45:25.083352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.369 [2024-07-13 15:45:25.083377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.369 [2024-07-13 15:45:25.083391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.369 [2024-07-13 15:45:25.083404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.369 [2024-07-13 15:45:25.083434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-07-13 15:45:25.093132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.369 [2024-07-13 15:45:25.093292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.369 [2024-07-13 15:45:25.093318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.369 [2024-07-13 15:45:25.093333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.369 [2024-07-13 15:45:25.093347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.369 [2024-07-13 15:45:25.093376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-07-13 15:45:25.103125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.369 [2024-07-13 15:45:25.103279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.369 [2024-07-13 15:45:25.103305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.369 [2024-07-13 15:45:25.103325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.369 [2024-07-13 15:45:25.103340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.369 [2024-07-13 15:45:25.103370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.369 qpair failed and we were unable to recover it. 
00:33:54.369 [2024-07-13 15:45:25.113147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.369 [2024-07-13 15:45:25.113276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.369 [2024-07-13 15:45:25.113301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.369 [2024-07-13 15:45:25.113315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.369 [2024-07-13 15:45:25.113328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.369 [2024-07-13 15:45:25.113357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.369 [2024-07-13 15:45:25.123196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.369 [2024-07-13 15:45:25.123335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.369 [2024-07-13 15:45:25.123361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.369 [2024-07-13 15:45:25.123376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.369 [2024-07-13 15:45:25.123389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.369 [2024-07-13 15:45:25.123419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.369 qpair failed and we were unable to recover it. 00:33:54.630 [2024-07-13 15:45:25.133307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.630 [2024-07-13 15:45:25.133473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.630 [2024-07-13 15:45:25.133500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.630 [2024-07-13 15:45:25.133514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.630 [2024-07-13 15:45:25.133528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.630 [2024-07-13 15:45:25.133557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.630 qpair failed and we were unable to recover it. 
00:33:54.630 [2024-07-13 15:45:25.143219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.630 [2024-07-13 15:45:25.143357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.630 [2024-07-13 15:45:25.143383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.630 [2024-07-13 15:45:25.143398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.630 [2024-07-13 15:45:25.143411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.630 [2024-07-13 15:45:25.143440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.630 qpair failed and we were unable to recover it. 00:33:54.630 [2024-07-13 15:45:25.153306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.630 [2024-07-13 15:45:25.153442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.630 [2024-07-13 15:45:25.153468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.630 [2024-07-13 15:45:25.153482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.630 [2024-07-13 15:45:25.153496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.630 [2024-07-13 15:45:25.153527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.630 qpair failed and we were unable to recover it. 00:33:54.630 [2024-07-13 15:45:25.163281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.630 [2024-07-13 15:45:25.163419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.630 [2024-07-13 15:45:25.163444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.630 [2024-07-13 15:45:25.163459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.631 [2024-07-13 15:45:25.163472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.631 [2024-07-13 15:45:25.163502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.631 qpair failed and we were unable to recover it. 
00:33:54.631 [2024-07-13 15:45:25.173337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.631 [2024-07-13 15:45:25.173473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.631 [2024-07-13 15:45:25.173499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.631 [2024-07-13 15:45:25.173513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.631 [2024-07-13 15:45:25.173526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.631 [2024-07-13 15:45:25.173557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.631 qpair failed and we were unable to recover it. 00:33:54.631 [2024-07-13 15:45:25.183333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.631 [2024-07-13 15:45:25.183476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.631 [2024-07-13 15:45:25.183501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.631 [2024-07-13 15:45:25.183515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.631 [2024-07-13 15:45:25.183528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.631 [2024-07-13 15:45:25.183558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.631 qpair failed and we were unable to recover it. 00:33:54.631 [2024-07-13 15:45:25.193447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.631 [2024-07-13 15:45:25.193577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.631 [2024-07-13 15:45:25.193602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.631 [2024-07-13 15:45:25.193623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.631 [2024-07-13 15:45:25.193637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.631 [2024-07-13 15:45:25.193667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.631 qpair failed and we were unable to recover it. 
00:33:54.631 [2024-07-13 15:45:25.203374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.631 [2024-07-13 15:45:25.203504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.631 [2024-07-13 15:45:25.203529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.631 [2024-07-13 15:45:25.203543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.631 [2024-07-13 15:45:25.203556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.631 [2024-07-13 15:45:25.203586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.631 qpair failed and we were unable to recover it. 00:33:54.631 [2024-07-13 15:45:25.213393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.631 [2024-07-13 15:45:25.213520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.631 [2024-07-13 15:45:25.213544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.631 [2024-07-13 15:45:25.213559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.631 [2024-07-13 15:45:25.213572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.631 [2024-07-13 15:45:25.213601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.631 qpair failed and we were unable to recover it. 00:33:54.631 [2024-07-13 15:45:25.223451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.631 [2024-07-13 15:45:25.223580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.631 [2024-07-13 15:45:25.223606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.631 [2024-07-13 15:45:25.223620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.631 [2024-07-13 15:45:25.223633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.631 [2024-07-13 15:45:25.223677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.631 qpair failed and we were unable to recover it. 
00:33:54.631 [2024-07-13 15:45:25.233523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.631 [2024-07-13 15:45:25.233689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.631 [2024-07-13 15:45:25.233714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.631 [2024-07-13 15:45:25.233729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.631 [2024-07-13 15:45:25.233741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.631 [2024-07-13 15:45:25.233770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.631 qpair failed and we were unable to recover it. 00:33:54.631 [2024-07-13 15:45:25.243528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.631 [2024-07-13 15:45:25.243666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.631 [2024-07-13 15:45:25.243691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.631 [2024-07-13 15:45:25.243705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.631 [2024-07-13 15:45:25.243718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.631 [2024-07-13 15:45:25.243749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.631 qpair failed and we were unable to recover it. 00:33:54.631 [2024-07-13 15:45:25.253541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.631 [2024-07-13 15:45:25.253683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.631 [2024-07-13 15:45:25.253709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.631 [2024-07-13 15:45:25.253723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.631 [2024-07-13 15:45:25.253736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.631 [2024-07-13 15:45:25.253767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.631 qpair failed and we were unable to recover it. 
00:33:54.631 [2024-07-13 15:45:25.263573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.631 [2024-07-13 15:45:25.263703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.631 [2024-07-13 15:45:25.263729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.631 [2024-07-13 15:45:25.263743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.631 [2024-07-13 15:45:25.263756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.631 [2024-07-13 15:45:25.263786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.631 qpair failed and we were unable to recover it. 00:33:54.631 [2024-07-13 15:45:25.273576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.631 [2024-07-13 15:45:25.273703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.631 [2024-07-13 15:45:25.273729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.631 [2024-07-13 15:45:25.273743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.631 [2024-07-13 15:45:25.273756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.631 [2024-07-13 15:45:25.273787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.631 qpair failed and we were unable to recover it. 00:33:54.631 [2024-07-13 15:45:25.283649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.631 [2024-07-13 15:45:25.283799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.631 [2024-07-13 15:45:25.283828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.631 [2024-07-13 15:45:25.283843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.631 [2024-07-13 15:45:25.283855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.631 [2024-07-13 15:45:25.283894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.631 qpair failed and we were unable to recover it. 
00:33:54.631 [2024-07-13 15:45:25.293652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.631 [2024-07-13 15:45:25.293785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.631 [2024-07-13 15:45:25.293811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.631 [2024-07-13 15:45:25.293825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.631 [2024-07-13 15:45:25.293838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.631 [2024-07-13 15:45:25.293876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.631 qpair failed and we were unable to recover it. 00:33:54.631 [2024-07-13 15:45:25.303678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.631 [2024-07-13 15:45:25.303832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.631 [2024-07-13 15:45:25.303858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.631 [2024-07-13 15:45:25.303879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.631 [2024-07-13 15:45:25.303894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.632 [2024-07-13 15:45:25.303923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.632 qpair failed and we were unable to recover it. 00:33:54.632 [2024-07-13 15:45:25.313706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.632 [2024-07-13 15:45:25.313841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.632 [2024-07-13 15:45:25.313871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.632 [2024-07-13 15:45:25.313888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.632 [2024-07-13 15:45:25.313901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.632 [2024-07-13 15:45:25.313931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.632 qpair failed and we were unable to recover it. 
00:33:54.632 [2024-07-13 15:45:25.323760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.632 [2024-07-13 15:45:25.323925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.632 [2024-07-13 15:45:25.323961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.632 [2024-07-13 15:45:25.323979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.632 [2024-07-13 15:45:25.323994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.632 [2024-07-13 15:45:25.324033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.632 qpair failed and we were unable to recover it. 00:33:54.632 [2024-07-13 15:45:25.333778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.632 [2024-07-13 15:45:25.333923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.632 [2024-07-13 15:45:25.333949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.632 [2024-07-13 15:45:25.333964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.632 [2024-07-13 15:45:25.333977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.632 [2024-07-13 15:45:25.334006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.632 qpair failed and we were unable to recover it. 00:33:54.632 [2024-07-13 15:45:25.343792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.632 [2024-07-13 15:45:25.343961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.632 [2024-07-13 15:45:25.343987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.632 [2024-07-13 15:45:25.344001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.632 [2024-07-13 15:45:25.344014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.632 [2024-07-13 15:45:25.344043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.632 qpair failed and we were unable to recover it. 
00:33:54.632 [2024-07-13 15:45:25.353816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.632 [2024-07-13 15:45:25.353973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.632 [2024-07-13 15:45:25.353998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.632 [2024-07-13 15:45:25.354013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.632 [2024-07-13 15:45:25.354025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.632 [2024-07-13 15:45:25.354055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.632 qpair failed and we were unable to recover it. 00:33:54.632 [2024-07-13 15:45:25.363933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.632 [2024-07-13 15:45:25.364069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.632 [2024-07-13 15:45:25.364094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.632 [2024-07-13 15:45:25.364108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.632 [2024-07-13 15:45:25.364121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.632 [2024-07-13 15:45:25.364152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.632 qpair failed and we were unable to recover it. 00:33:54.632 [2024-07-13 15:45:25.373886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.632 [2024-07-13 15:45:25.374017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.632 [2024-07-13 15:45:25.374047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.632 [2024-07-13 15:45:25.374062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.632 [2024-07-13 15:45:25.374075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.632 [2024-07-13 15:45:25.374119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.632 qpair failed and we were unable to recover it. 
00:33:54.632 [2024-07-13 15:45:25.383942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.632 [2024-07-13 15:45:25.384109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.632 [2024-07-13 15:45:25.384134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.632 [2024-07-13 15:45:25.384148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.632 [2024-07-13 15:45:25.384162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.632 [2024-07-13 15:45:25.384191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.632 qpair failed and we were unable to recover it. 00:33:54.632 [2024-07-13 15:45:25.393914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.632 [2024-07-13 15:45:25.394054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.632 [2024-07-13 15:45:25.394079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.632 [2024-07-13 15:45:25.394093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.632 [2024-07-13 15:45:25.394106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.632 [2024-07-13 15:45:25.394136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.632 qpair failed and we were unable to recover it. 00:33:54.892 [2024-07-13 15:45:25.404004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.892 [2024-07-13 15:45:25.404187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.892 [2024-07-13 15:45:25.404212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.892 [2024-07-13 15:45:25.404227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.892 [2024-07-13 15:45:25.404241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.892 [2024-07-13 15:45:25.404269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.892 qpair failed and we were unable to recover it. 
00:33:54.892 [2024-07-13 15:45:25.414014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.892 [2024-07-13 15:45:25.414163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.892 [2024-07-13 15:45:25.414189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.892 [2024-07-13 15:45:25.414204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.892 [2024-07-13 15:45:25.414222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.892 [2024-07-13 15:45:25.414252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.892 qpair failed and we were unable to recover it. 00:33:54.892 [2024-07-13 15:45:25.423999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.892 [2024-07-13 15:45:25.424133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.892 [2024-07-13 15:45:25.424158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.892 [2024-07-13 15:45:25.424172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.892 [2024-07-13 15:45:25.424186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.892 [2024-07-13 15:45:25.424217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.892 qpair failed and we were unable to recover it. 00:33:54.892 [2024-07-13 15:45:25.434040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.892 [2024-07-13 15:45:25.434175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.892 [2024-07-13 15:45:25.434200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.892 [2024-07-13 15:45:25.434215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.892 [2024-07-13 15:45:25.434229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.892 [2024-07-13 15:45:25.434260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.892 qpair failed and we were unable to recover it. 
00:33:54.892 [2024-07-13 15:45:25.444099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.892 [2024-07-13 15:45:25.444287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.892 [2024-07-13 15:45:25.444312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.892 [2024-07-13 15:45:25.444327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.892 [2024-07-13 15:45:25.444340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.892 [2024-07-13 15:45:25.444368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.892 qpair failed and we were unable to recover it. 00:33:54.892 [2024-07-13 15:45:25.454107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.892 [2024-07-13 15:45:25.454242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.892 [2024-07-13 15:45:25.454267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.892 [2024-07-13 15:45:25.454281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.892 [2024-07-13 15:45:25.454295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.892 [2024-07-13 15:45:25.454323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.892 qpair failed and we were unable to recover it. 00:33:54.892 [2024-07-13 15:45:25.464152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.892 [2024-07-13 15:45:25.464307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.892 [2024-07-13 15:45:25.464332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.892 [2024-07-13 15:45:25.464346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.892 [2024-07-13 15:45:25.464359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.892 [2024-07-13 15:45:25.464388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.892 qpair failed and we were unable to recover it. 
00:33:54.892 [2024-07-13 15:45:25.474201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.892 [2024-07-13 15:45:25.474345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.892 [2024-07-13 15:45:25.474371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.892 [2024-07-13 15:45:25.474385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.892 [2024-07-13 15:45:25.474398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.892 [2024-07-13 15:45:25.474428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.892 qpair failed and we were unable to recover it. 00:33:54.892 [2024-07-13 15:45:25.484199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.892 [2024-07-13 15:45:25.484340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.892 [2024-07-13 15:45:25.484365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.892 [2024-07-13 15:45:25.484379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.892 [2024-07-13 15:45:25.484392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.892 [2024-07-13 15:45:25.484421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.892 qpair failed and we were unable to recover it. 00:33:54.892 [2024-07-13 15:45:25.494211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.892 [2024-07-13 15:45:25.494384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.892 [2024-07-13 15:45:25.494410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.892 [2024-07-13 15:45:25.494424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.892 [2024-07-13 15:45:25.494437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.892 [2024-07-13 15:45:25.494468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.892 qpair failed and we were unable to recover it. 
00:33:54.892 [2024-07-13 15:45:25.504226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.892 [2024-07-13 15:45:25.504357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.892 [2024-07-13 15:45:25.504382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.892 [2024-07-13 15:45:25.504396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.892 [2024-07-13 15:45:25.504414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7020000b90 00:33:54.892 [2024-07-13 15:45:25.504445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:54.892 qpair failed and we were unable to recover it. 00:33:54.892 [2024-07-13 15:45:25.514274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.892 [2024-07-13 15:45:25.514404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.892 [2024-07-13 15:45:25.514436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.892 [2024-07-13 15:45:25.514451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.892 [2024-07-13 15:45:25.514465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f2450 00:33:54.892 [2024-07-13 15:45:25.514495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:54.892 qpair failed and we were unable to recover it. 00:33:54.892 [2024-07-13 15:45:25.524372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:54.892 [2024-07-13 15:45:25.524509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:54.892 [2024-07-13 15:45:25.524535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:54.892 [2024-07-13 15:45:25.524550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:54.892 [2024-07-13 15:45:25.524564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5f2450 00:33:54.892 [2024-07-13 15:45:25.524592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:54.892 qpair failed and we were unable to recover it. 00:33:54.892 [2024-07-13 15:45:25.524721] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:33:54.892 A controller has encountered a failure and is being reset. 00:33:54.892 Controller properly reset. 
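The block above is the heart of the tc2 disconnect test: the target side (ctrlr.c) rejects each new I/O qpair because controller ID 0x1 is no longer known, the host's Fabrics CONNECT therefore completes with sct 1, sc 130 (0x82, i.e. Connect Invalid Parameters), the qpair is dropped, and once even the Keep Alive submission fails the host resets and re-attaches the controller. A minimal host-side sketch of the same CONNECT path using nvme-cli against the address and subsystem shown in the log; this is an illustration only, the autotest drives the connection through SPDK's own initiator rather than the kernel one:

  # Assumes the kernel NVMe/TCP initiator is available and the SPDK target
  # from this log is still listening on 10.0.0.2:4420.
  modprobe nvme-tcp
  # Creates the admin queue plus I/O queues; a rejected I/O-qpair CONNECT
  # (sct 1 / sc 0x82 as above) surfaces here as a failed connect.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # Inspect the attached controller, then detach again.
  nvme list-subsys
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1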
00:33:54.892 Initializing NVMe Controllers 00:33:54.892 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:54.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:54.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:54.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:54.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:54.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:54.892 Initialization complete. Launching workers. 00:33:54.892 Starting thread on core 1 00:33:54.893 Starting thread on core 2 00:33:54.893 Starting thread on core 3 00:33:54.893 Starting thread on core 0 00:33:54.893 15:45:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:33:54.893 00:33:54.893 real 0m10.782s 00:33:54.893 user 0m17.264s 00:33:54.893 sys 0m5.562s 00:33:54.893 15:45:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:54.893 15:45:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:54.893 ************************************ 00:33:54.893 END TEST nvmf_target_disconnect_tc2 00:33:54.893 ************************************ 00:33:54.893 15:45:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:33:54.893 15:45:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:33:54.893 15:45:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:33:54.893 15:45:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:33:54.893 15:45:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:54.893 15:45:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:33:54.893 15:45:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:54.893 15:45:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:33:54.893 15:45:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:54.893 15:45:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:54.893 rmmod nvme_tcp 00:33:54.893 rmmod nvme_fabrics 00:33:54.893 rmmod nvme_keyring 00:33:54.893 15:45:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:54.893 15:45:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:33:54.893 15:45:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:33:54.893 15:45:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1263060 ']' 00:33:54.893 15:45:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1263060 00:33:54.893 15:45:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1263060 ']' 00:33:54.893 15:45:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1263060 00:33:54.893 15:45:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:33:54.893 15:45:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:54.893 15:45:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 1263060 00:33:55.151 15:45:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:33:55.151 15:45:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:33:55.151 15:45:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1263060' 00:33:55.151 killing process with pid 1263060 00:33:55.151 15:45:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1263060 00:33:55.151 15:45:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1263060 00:33:55.151 15:45:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:55.151 15:45:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:55.151 15:45:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:55.151 15:45:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:55.151 15:45:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:55.151 15:45:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:55.151 15:45:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:55.151 15:45:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:57.680 15:45:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:57.680 00:33:57.680 real 0m15.613s 00:33:57.680 user 0m43.388s 00:33:57.680 sys 0m7.613s 00:33:57.680 15:45:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:57.680 15:45:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:57.680 ************************************ 00:33:57.680 END TEST nvmf_target_disconnect 00:33:57.680 ************************************ 00:33:57.680 15:45:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:57.680 15:45:27 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:33:57.680 15:45:27 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:57.680 15:45:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:57.680 15:45:27 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:33:57.680 00:33:57.680 real 27m9.784s 00:33:57.680 user 73m50.946s 00:33:57.680 sys 6m25.897s 00:33:57.680 15:45:27 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:57.680 15:45:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:57.680 ************************************ 00:33:57.680 END TEST nvmf_tcp 00:33:57.680 ************************************ 00:33:57.680 15:45:27 -- common/autotest_common.sh@1142 -- # return 0 00:33:57.680 15:45:27 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:33:57.680 15:45:27 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:57.680 15:45:27 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:57.680 15:45:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:57.680 15:45:27 -- common/autotest_common.sh@10 -- # set +x 00:33:57.680 ************************************ 00:33:57.680 START TEST spdkcli_nvmf_tcp 00:33:57.680 ************************************ 00:33:57.680 15:45:28 
spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:57.680 * Looking for test storage... 00:33:57.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:57.680 15:45:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:57.680 15:45:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:57.680 15:45:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:57.680 15:45:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:57.680 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:57.680 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:57.680 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:57.680 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:57.680 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:57.680 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:57.680 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:57.680 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:57.680 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:57.680 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:57.680 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:57.680 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:57.680 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:57.680 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:57.680 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:57.680 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:57.680 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:57.680 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1264258 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1264258 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1264258 ']' 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:57.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:57.681 [2024-07-13 15:45:28.128413] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:33:57.681 [2024-07-13 15:45:28.128494] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264258 ] 00:33:57.681 EAL: No free 2048 kB hugepages reported on node 1 00:33:57.681 [2024-07-13 15:45:28.160360] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:57.681 [2024-07-13 15:45:28.188041] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:57.681 [2024-07-13 15:45:28.273993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:57.681 [2024-07-13 15:45:28.273997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:57.681 15:45:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:57.681 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:57.681 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:57.681 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:57.681 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:57.681 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:57.681 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:57.681 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:57.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:57.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:57.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:57.681 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 
allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:57.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:57.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:57.681 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:57.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:57.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:57.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:57.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:57.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:57.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:57.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:57.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:57.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:57.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:57.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:57.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:57.681 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:57.681 ' 00:34:00.221 [2024-07-13 15:45:30.944277] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:01.602 [2024-07-13 15:45:32.176569] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:04.130 [2024-07-13 15:45:34.463735] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:06.033 [2024-07-13 15:45:36.434044] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:07.410 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:07.410 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:07.410 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:07.410 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:07.410 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:07.410 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:07.410 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:07.410 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:07.410 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:07.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:07.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:07.410 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:07.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:07.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:07.410 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:07.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:07.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:07.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:07.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:07.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:07.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:07.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:07.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:07.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:07.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:07.410 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:07.411 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:07.411 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:07.411 15:45:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:07.411 15:45:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:07.411 15:45:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:07.411 15:45:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:07.411 15:45:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:07.411 15:45:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:07.411 15:45:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:07.411 15:45:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:34:07.979 15:45:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:07.979 15:45:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:07.979 15:45:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:07.979 15:45:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:07.979 15:45:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:07.979 15:45:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:07.980 15:45:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:07.980 15:45:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:07.980 15:45:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:07.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:07.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:07.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:07.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:07.980 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:07.980 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:07.980 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:07.980 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:07.980 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:07.980 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:07.980 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:07.980 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:07.980 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:07.980 ' 00:34:13.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:13.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:13.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:13.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:13.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:13.316 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:13.316 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:13.316 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:13.316 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:13.316 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:13.316 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:13.316 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:13.316 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:13.316 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:13.316 15:45:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:13.316 15:45:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:13.317 15:45:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:13.317 15:45:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1264258 00:34:13.317 15:45:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1264258 ']' 00:34:13.317 15:45:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1264258 00:34:13.317 15:45:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:34:13.317 15:45:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:13.317 15:45:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1264258 00:34:13.317 15:45:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:13.317 15:45:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:13.317 15:45:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1264258' 00:34:13.317 killing process with pid 1264258 00:34:13.317 15:45:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1264258 00:34:13.317 15:45:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1264258 00:34:13.317 15:45:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:13.317 15:45:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:13.317 15:45:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1264258 ']' 00:34:13.317 15:45:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1264258 00:34:13.317 15:45:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1264258 ']' 00:34:13.317 15:45:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1264258 00:34:13.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1264258) - No such process 00:34:13.317 15:45:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1264258 is not found' 00:34:13.317 Process with pid 1264258 is not found 00:34:13.317 15:45:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:13.317 15:45:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:13.317 15:45:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:13.317 00:34:13.317 real 0m15.993s 00:34:13.317 user 0m33.887s 00:34:13.317 sys 0m0.797s 00:34:13.317 15:45:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:13.317 15:45:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:13.317 ************************************ 00:34:13.317 END TEST spdkcli_nvmf_tcp 00:34:13.317 ************************************ 00:34:13.317 15:45:44 -- common/autotest_common.sh@1142 -- # return 0 00:34:13.317 
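The spdkcli run above builds the whole nvmf configuration through spdkcli_job.py and then tears it down again. The same objects can be created with plain SPDK RPCs; a minimal sketch for the first subsystem only, assuming the default /var/tmp/spdk.sock RPC socket and current rpc.py option spellings (neither is taken from this log):

  # Transport with the options the test passes to 'nvmf/transport create'.
  scripts/rpc.py nvmf_create_transport -t TCP --max-io-qpairs-per-ctrlr 4 --io-unit-size 8192
  # Backing bdev plus subsystem nqn.2014-08.org.spdk:cnode1
  # (serial N37SXV509SRW, max 4 namespaces, any host allowed).
  scripts/rpc.py bdev_malloc_create -b Malloc3 32 512
  scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -m 4 -a
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 -n 1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260 -f ipv4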
15:45:44 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:13.317 15:45:44 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:13.317 15:45:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:13.317 15:45:44 -- common/autotest_common.sh@10 -- # set +x 00:34:13.317 ************************************ 00:34:13.317 START TEST nvmf_identify_passthru 00:34:13.317 ************************************ 00:34:13.317 15:45:44 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:13.575 * Looking for test storage... 00:34:13.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:13.575 15:45:44 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:13.575 15:45:44 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:13.575 15:45:44 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:13.575 15:45:44 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:13.575 15:45:44 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.575 15:45:44 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.575 15:45:44 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.575 15:45:44 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:13.575 15:45:44 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:13.575 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:13.575 15:45:44 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:13.575 15:45:44 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:13.575 15:45:44 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:13.575 15:45:44 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:13.575 15:45:44 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.575 15:45:44 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.575 15:45:44 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.575 15:45:44 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:13.575 15:45:44 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.576 15:45:44 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:13.576 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:13.576 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:13.576 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:13.576 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:13.576 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:13.576 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.576 15:45:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:13.576 15:45:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.576 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:13.576 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:13.576 15:45:44 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:13.576 15:45:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:15.476 15:45:46 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:15.476 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:15.476 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:15.476 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:15.477 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:15.477 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
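(For reference, a minimal sketch of the NIC discovery step traced above, assuming the two E810 ports and the sysfs layout reported in this run; the actual logic lives in nvmf/common.sh:)

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        # each PCI function exposes its bound kernel net devices under sysfs
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net devices under $pci: ${netdev##*/}"
        done
    done

On this host both ports resolve to cvl_0_0 and cvl_0_1, which the harness then assigns as target and initiator interfaces.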
00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:15.477 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:15.737 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:15.737 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:15.737 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:15.737 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:15.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:15.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:34:15.737 00:34:15.737 --- 10.0.0.2 ping statistics --- 00:34:15.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.737 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:34:15.737 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:15.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:15.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:34:15.737 00:34:15.737 --- 10.0.0.1 ping statistics --- 00:34:15.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.737 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:34:15.737 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:15.737 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:15.737 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:15.737 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:15.737 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:15.737 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:15.737 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:15.737 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:15.737 15:45:46 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:15.737 15:45:46 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:15.737 15:45:46 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:15.737 15:45:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:15.737 15:45:46 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:15.737 15:45:46 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:34:15.737 15:45:46 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:34:15.737 15:45:46 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:34:15.737 15:45:46 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:34:15.737 15:45:46 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:34:15.737 15:45:46 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:34:15.737 15:45:46 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:15.737 15:45:46 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:15.737 15:45:46 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:34:15.737 15:45:46 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:34:15.737 15:45:46 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:34:15.737 15:45:46 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:34:15.737 15:45:46 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:34:15.737 15:45:46 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:34:15.737 15:45:46 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:15.737 15:45:46 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:15.737 15:45:46 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:15.737 EAL: No free 2048 kB hugepages reported on node 1 00:34:19.926 
15:45:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:34:19.926 15:45:50 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:19.926 15:45:50 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:19.926 15:45:50 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:19.926 EAL: No free 2048 kB hugepages reported on node 1 00:34:24.116 15:45:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:24.116 15:45:54 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:24.116 15:45:54 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:24.116 15:45:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:24.116 15:45:54 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:24.116 15:45:54 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:24.116 15:45:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:24.116 15:45:54 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1268867 00:34:24.117 15:45:54 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:24.117 15:45:54 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:24.117 15:45:54 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1268867 00:34:24.117 15:45:54 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1268867 ']' 00:34:24.117 15:45:54 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:24.117 15:45:54 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:24.117 15:45:54 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:24.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:24.117 15:45:54 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:24.117 15:45:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:24.117 [2024-07-13 15:45:54.844587] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:34:24.117 [2024-07-13 15:45:54.844685] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:24.117 EAL: No free 2048 kB hugepages reported on node 1 00:34:24.377 [2024-07-13 15:45:54.890761] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:24.377 [2024-07-13 15:45:54.919124] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:24.377 [2024-07-13 15:45:55.011608] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:34:24.377 [2024-07-13 15:45:55.011686] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:24.377 [2024-07-13 15:45:55.011700] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:24.377 [2024-07-13 15:45:55.011711] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:24.377 [2024-07-13 15:45:55.011721] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:24.377 [2024-07-13 15:45:55.014887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:24.377 [2024-07-13 15:45:55.014920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:24.377 [2024-07-13 15:45:55.014995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:24.377 [2024-07-13 15:45:55.014998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:24.377 15:45:55 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:24.377 15:45:55 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:34:24.377 15:45:55 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:24.377 15:45:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.377 15:45:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:24.377 INFO: Log level set to 20 00:34:24.377 INFO: Requests: 00:34:24.377 { 00:34:24.377 "jsonrpc": "2.0", 00:34:24.377 "method": "nvmf_set_config", 00:34:24.377 "id": 1, 00:34:24.377 "params": { 00:34:24.377 "admin_cmd_passthru": { 00:34:24.377 "identify_ctrlr": true 00:34:24.377 } 00:34:24.377 } 00:34:24.377 } 00:34:24.377 00:34:24.377 INFO: response: 00:34:24.377 { 00:34:24.377 "jsonrpc": "2.0", 00:34:24.377 "id": 1, 00:34:24.377 "result": true 00:34:24.377 } 00:34:24.377 00:34:24.377 15:45:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.377 15:45:55 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:24.377 15:45:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.377 15:45:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:24.377 INFO: Setting log level to 20 00:34:24.377 INFO: Setting log level to 20 00:34:24.377 INFO: Log level set to 20 00:34:24.377 INFO: Log level set to 20 00:34:24.377 INFO: Requests: 00:34:24.377 { 00:34:24.377 "jsonrpc": "2.0", 00:34:24.377 "method": "framework_start_init", 00:34:24.377 "id": 1 00:34:24.377 } 00:34:24.377 00:34:24.377 INFO: Requests: 00:34:24.377 { 00:34:24.377 "jsonrpc": "2.0", 00:34:24.377 "method": "framework_start_init", 00:34:24.377 "id": 1 00:34:24.377 } 00:34:24.377 00:34:24.637 [2024-07-13 15:45:55.173249] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:24.637 INFO: response: 00:34:24.637 { 00:34:24.637 "jsonrpc": "2.0", 00:34:24.637 "id": 1, 00:34:24.637 "result": true 00:34:24.637 } 00:34:24.637 00:34:24.637 INFO: response: 00:34:24.637 { 00:34:24.637 "jsonrpc": "2.0", 00:34:24.637 "id": 1, 00:34:24.637 "result": true 00:34:24.637 } 00:34:24.637 00:34:24.637 15:45:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.637 15:45:55 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 
00:34:24.637 15:45:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.637 15:45:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:24.637 INFO: Setting log level to 40 00:34:24.637 INFO: Setting log level to 40 00:34:24.637 INFO: Setting log level to 40 00:34:24.637 [2024-07-13 15:45:55.183364] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:24.637 15:45:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.638 15:45:55 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:24.638 15:45:55 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:24.638 15:45:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:24.638 15:45:55 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:34:24.638 15:45:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.638 15:45:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:27.923 Nvme0n1 00:34:27.923 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.923 15:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:27.923 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.924 15:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.924 15:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:27.924 [2024-07-13 15:45:58.072186] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.924 15:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:27.924 [ 00:34:27.924 { 00:34:27.924 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:27.924 "subtype": "Discovery", 00:34:27.924 "listen_addresses": [], 00:34:27.924 "allow_any_host": true, 00:34:27.924 "hosts": [] 00:34:27.924 }, 00:34:27.924 { 00:34:27.924 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:27.924 "subtype": "NVMe", 00:34:27.924 "listen_addresses": [ 00:34:27.924 { 00:34:27.924 "trtype": "TCP", 00:34:27.924 "adrfam": "IPv4", 00:34:27.924 "traddr": "10.0.0.2", 00:34:27.924 
"trsvcid": "4420" 00:34:27.924 } 00:34:27.924 ], 00:34:27.924 "allow_any_host": true, 00:34:27.924 "hosts": [], 00:34:27.924 "serial_number": "SPDK00000000000001", 00:34:27.924 "model_number": "SPDK bdev Controller", 00:34:27.924 "max_namespaces": 1, 00:34:27.924 "min_cntlid": 1, 00:34:27.924 "max_cntlid": 65519, 00:34:27.924 "namespaces": [ 00:34:27.924 { 00:34:27.924 "nsid": 1, 00:34:27.924 "bdev_name": "Nvme0n1", 00:34:27.924 "name": "Nvme0n1", 00:34:27.924 "nguid": "B158D083382D4F18AF5DD232C2F23392", 00:34:27.924 "uuid": "b158d083-382d-4f18-af5d-d232c2f23392" 00:34:27.924 } 00:34:27.924 ] 00:34:27.924 } 00:34:27.924 ] 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.924 15:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:27.924 15:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:27.924 15:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:27.924 EAL: No free 2048 kB hugepages reported on node 1 00:34:27.924 15:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:34:27.924 15:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:27.924 15:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:27.924 15:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:27.924 EAL: No free 2048 kB hugepages reported on node 1 00:34:27.924 15:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:27.924 15:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:34:27.924 15:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:27.924 15:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.924 15:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:27.924 15:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:27.924 15:45:58 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:27.924 15:45:58 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:34:27.924 15:45:58 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:27.924 15:45:58 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:34:27.924 15:45:58 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:27.924 15:45:58 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:27.924 rmmod nvme_tcp 00:34:27.924 rmmod nvme_fabrics 00:34:27.924 rmmod nvme_keyring 00:34:27.924 15:45:58 nvmf_identify_passthru -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:27.924 15:45:58 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:34:27.924 15:45:58 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:34:27.924 15:45:58 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1268867 ']' 00:34:27.924 15:45:58 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1268867 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1268867 ']' 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1268867 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1268867 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1268867' 00:34:27.924 killing process with pid 1268867 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1268867 00:34:27.924 15:45:58 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1268867 00:34:29.825 15:46:00 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:29.825 15:46:00 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:29.825 15:46:00 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:29.825 15:46:00 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:29.825 15:46:00 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:29.825 15:46:00 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.825 15:46:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:29.825 15:46:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:31.781 15:46:02 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:31.781 00:34:31.781 real 0m18.093s 00:34:31.781 user 0m26.727s 00:34:31.781 sys 0m2.441s 00:34:31.781 15:46:02 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:31.781 15:46:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:31.781 ************************************ 00:34:31.781 END TEST nvmf_identify_passthru 00:34:31.781 ************************************ 00:34:31.781 15:46:02 -- common/autotest_common.sh@1142 -- # return 0 00:34:31.781 15:46:02 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:31.781 15:46:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:31.781 15:46:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:31.781 15:46:02 -- common/autotest_common.sh@10 -- # set +x 00:34:31.781 ************************************ 00:34:31.781 START TEST nvmf_dif 00:34:31.781 ************************************ 00:34:31.781 15:46:02 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:31.781 * Looking for test 
storage... 00:34:31.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:31.781 15:46:02 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:31.781 15:46:02 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:31.781 15:46:02 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:31.781 15:46:02 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:31.781 15:46:02 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.781 15:46:02 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.781 15:46:02 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.781 15:46:02 nvmf_dif -- 
paths/export.sh@5 -- # export PATH 00:34:31.781 15:46:02 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:31.781 15:46:02 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:31.781 15:46:02 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:31.781 15:46:02 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:31.781 15:46:02 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:31.781 15:46:02 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:31.781 15:46:02 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:31.781 15:46:02 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:31.781 15:46:02 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:34:31.781 15:46:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:33.680 15:46:04 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:33.680 15:46:04 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:34:33.680 15:46:04 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:33.680 15:46:04 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:33.680 15:46:04 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:33.680 15:46:04 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:33.680 15:46:04 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:33.680 15:46:04 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:34:33.680 15:46:04 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:33.680 15:46:04 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:34:33.680 15:46:04 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:34:33.681 15:46:04 nvmf_dif 
-- nvmf/common.sh@298 -- # mlx=() 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:33.681 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:33.681 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:33.681 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:33.681 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:33.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:33.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:34:33.681 00:34:33.681 --- 10.0.0.2 ping statistics --- 00:34:33.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.681 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:33.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:33.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:34:33.681 00:34:33.681 --- 10.0.0.1 ping statistics --- 00:34:33.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.681 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:34:33.681 15:46:04 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:34.619 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:34.619 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:34.619 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:34.619 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:34.619 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:34.619 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:34.619 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:34.619 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:34.619 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:34.619 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:34.619 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:34.878 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:34.878 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:34.878 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:34.878 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:34.878 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:34.878 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:34.878 15:46:05 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:34.878 15:46:05 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:34.878 15:46:05 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:34.878 15:46:05 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:34.878 15:46:05 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:34.878 15:46:05 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:34.878 15:46:05 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:34.879 15:46:05 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:34.879 15:46:05 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:34.879 15:46:05 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:34.879 15:46:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:34.879 15:46:05 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1272030 00:34:34.879 15:46:05 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:34.879 15:46:05 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1272030 00:34:34.879 15:46:05 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1272030 ']' 00:34:34.879 15:46:05 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:34.879 15:46:05 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:34.879 15:46:05 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:34.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:34.879 15:46:05 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:34.879 15:46:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:34.879 [2024-07-13 15:46:05.626394] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:34:34.879 [2024-07-13 15:46:05.626481] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:35.137 EAL: No free 2048 kB hugepages reported on node 1 00:34:35.137 [2024-07-13 15:46:05.665024] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:35.137 [2024-07-13 15:46:05.690772] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:35.137 [2024-07-13 15:46:05.774532] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:35.137 [2024-07-13 15:46:05.774579] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:35.137 [2024-07-13 15:46:05.774607] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:35.137 [2024-07-13 15:46:05.774618] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:35.137 [2024-07-13 15:46:05.774629] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:35.137 [2024-07-13 15:46:05.774659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.137 15:46:05 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:35.137 15:46:05 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:34:35.137 15:46:05 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:35.137 15:46:05 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:35.137 15:46:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:35.396 15:46:05 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:35.396 15:46:05 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:35.396 15:46:05 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:35.396 15:46:05 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.396 15:46:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:35.396 [2024-07-13 15:46:05.915116] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:35.396 15:46:05 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.396 15:46:05 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:35.396 15:46:05 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:35.396 15:46:05 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:35.396 15:46:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:35.396 ************************************ 00:34:35.396 START TEST fio_dif_1_default 00:34:35.396 ************************************ 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:35.396 bdev_null0 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:35.396 [2024-07-13 15:46:05.975435] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:34:35.396 15:46:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:35.397 15:46:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:35.397 15:46:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:35.397 15:46:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:35.397 15:46:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:35.397 15:46:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:35.397 15:46:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:35.397 { 00:34:35.397 "params": { 00:34:35.397 "name": "Nvme$subsystem", 00:34:35.397 "trtype": "$TEST_TRANSPORT", 00:34:35.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:35.397 "adrfam": "ipv4", 00:34:35.397 "trsvcid": "$NVMF_PORT", 00:34:35.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:35.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:35.397 "hdgst": ${hdgst:-false}, 00:34:35.397 "ddgst": ${ddgst:-false} 00:34:35.397 }, 00:34:35.397 "method": "bdev_nvme_attach_controller" 00:34:35.397 } 00:34:35.397 EOF 00:34:35.397 )") 00:34:35.397 15:46:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:34:35.397 15:46:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:35.397 15:46:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:35.397 15:46:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:34:35.397 15:46:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:35.397 15:46:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:35.397 15:46:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:34:35.397 15:46:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:35.397 15:46:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:35.397 15:46:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:34:35.397 15:46:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:34:35.397 15:46:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:35.397 "params": { 00:34:35.397 "name": "Nvme0", 00:34:35.397 "trtype": "tcp", 00:34:35.397 "traddr": "10.0.0.2", 00:34:35.397 "adrfam": "ipv4", 00:34:35.397 "trsvcid": "4420", 00:34:35.397 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:35.397 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:35.397 "hdgst": false, 00:34:35.397 "ddgst": false 00:34:35.397 }, 00:34:35.397 "method": "bdev_nvme_attach_controller" 00:34:35.397 }' 00:34:35.397 15:46:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:35.397 15:46:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:35.397 15:46:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:35.397 15:46:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:35.397 15:46:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:35.397 15:46:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:35.397 15:46:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:35.397 15:46:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:35.397 15:46:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:35.397 15:46:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:35.655 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:35.655 fio-3.35 00:34:35.655 Starting 1 thread 00:34:35.655 EAL: No free 2048 kB hugepages reported on node 1 00:34:47.852 00:34:47.852 filename0: (groupid=0, jobs=1): err= 0: pid=1272256: Sat Jul 13 15:46:16 2024 00:34:47.852 read: IOPS=189, BW=758KiB/s (776kB/s)(7600KiB/10029msec) 00:34:47.852 slat (nsec): min=5341, max=74364, avg=9355.10, stdev=4036.50 00:34:47.852 clat (usec): min=792, max=47042, avg=21082.99, stdev=20173.09 00:34:47.852 lat (usec): min=799, max=47071, avg=21092.34, stdev=20172.50 00:34:47.852 clat percentiles (usec): 00:34:47.852 | 1.00th=[ 824], 5.00th=[ 840], 10.00th=[ 848], 20.00th=[ 865], 00:34:47.852 | 30.00th=[ 881], 40.00th=[ 898], 50.00th=[41157], 60.00th=[41157], 00:34:47.852 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:34:47.852 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:34:47.852 | 99.99th=[46924] 00:34:47.852 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=758.40, stdev=23.45, samples=20 00:34:47.853 iops : min= 176, max= 192, 
avg=189.60, stdev= 5.86, samples=20 00:34:47.853 lat (usec) : 1000=49.47% 00:34:47.853 lat (msec) : 2=0.42%, 50=50.11% 00:34:47.853 cpu : usr=89.03%, sys=10.68%, ctx=23, majf=0, minf=263 00:34:47.853 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:47.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.853 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.853 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:47.853 00:34:47.853 Run status group 0 (all jobs): 00:34:47.853 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7600KiB (7782kB), run=10029-10029msec 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.853 00:34:47.853 real 0m11.110s 00:34:47.853 user 0m9.964s 00:34:47.853 sys 0m1.337s 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:47.853 ************************************ 00:34:47.853 END TEST fio_dif_1_default 00:34:47.853 ************************************ 00:34:47.853 15:46:17 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:34:47.853 15:46:17 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:47.853 15:46:17 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:47.853 15:46:17 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:47.853 15:46:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:47.853 ************************************ 00:34:47.853 START TEST fio_dif_1_multi_subsystems 00:34:47.853 ************************************ 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@30 -- # for sub in "$@" 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:47.853 bdev_null0 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:47.853 [2024-07-13 15:46:17.130975] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:47.853 bdev_null1 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
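The xtrace above walks through dif.sh's create_subsystems helper: for each subsystem index it creates a 64 MB null bdev with 512-byte blocks, 16 bytes of per-block metadata and DIF type 1, wraps it in an NVMe-oF subsystem, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420. A minimal sketch of the same sequence issued directly follows; the scripts/rpc.py path and the transport call are assumptions (the harness creates the transport earlier in this run), while the argument values are taken from the trace:

    # Subsystem 0; the trace repeats the same steps with *1 for the second subsystem.
    scripts/rpc.py nvmf_create_transport -t tcp                      # assumed done earlier in this run
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
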
00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:47.853 { 00:34:47.853 "params": { 00:34:47.853 "name": "Nvme$subsystem", 00:34:47.853 "trtype": "$TEST_TRANSPORT", 00:34:47.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:47.853 "adrfam": "ipv4", 00:34:47.853 "trsvcid": "$NVMF_PORT", 00:34:47.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:47.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:47.853 "hdgst": ${hdgst:-false}, 00:34:47.853 "ddgst": ${ddgst:-false} 00:34:47.853 }, 00:34:47.853 "method": "bdev_nvme_attach_controller" 00:34:47.853 } 00:34:47.853 EOF 00:34:47.853 )") 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:47.853 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:47.853 { 00:34:47.853 "params": { 00:34:47.853 "name": "Nvme$subsystem", 00:34:47.853 "trtype": "$TEST_TRANSPORT", 00:34:47.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:47.853 "adrfam": "ipv4", 00:34:47.853 "trsvcid": "$NVMF_PORT", 00:34:47.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:47.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:47.853 "hdgst": ${hdgst:-false}, 00:34:47.853 "ddgst": ${ddgst:-false} 00:34:47.853 }, 00:34:47.853 "method": "bdev_nvme_attach_controller" 00:34:47.853 } 00:34:47.853 EOF 00:34:47.853 )") 00:34:47.854 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:47.854 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:47.854 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:47.854 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:34:47.854 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:34:47.854 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:47.854 "params": { 00:34:47.854 "name": "Nvme0", 00:34:47.854 "trtype": "tcp", 00:34:47.854 "traddr": "10.0.0.2", 00:34:47.854 "adrfam": "ipv4", 00:34:47.854 "trsvcid": "4420", 00:34:47.854 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:47.854 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:47.854 "hdgst": false, 00:34:47.854 "ddgst": false 00:34:47.854 }, 00:34:47.854 "method": "bdev_nvme_attach_controller" 00:34:47.854 },{ 00:34:47.854 "params": { 00:34:47.854 "name": "Nvme1", 00:34:47.854 "trtype": "tcp", 00:34:47.854 "traddr": "10.0.0.2", 00:34:47.854 "adrfam": "ipv4", 00:34:47.854 "trsvcid": "4420", 00:34:47.854 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:47.854 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:47.854 "hdgst": false, 00:34:47.854 "ddgst": false 00:34:47.854 }, 00:34:47.854 "method": "bdev_nvme_attach_controller" 00:34:47.854 }' 00:34:47.854 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:47.854 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:47.854 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:47.854 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:47.854 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:47.854 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:47.854 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:47.854 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:47.854 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:47.854 15:46:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:47.854 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:47.854 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:47.854 fio-3.35 00:34:47.854 Starting 2 threads 00:34:47.854 EAL: No free 2048 kB hugepages reported on node 1 00:34:57.822 00:34:57.822 filename0: (groupid=0, jobs=1): err= 0: pid=1273654: Sat Jul 13 15:46:28 2024 00:34:57.822 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10018msec) 00:34:57.822 slat (nsec): min=7290, max=71630, avg=10784.83, stdev=5527.92 00:34:57.822 clat (usec): min=40871, max=42868, avg=41530.55, stdev=499.27 00:34:57.822 lat (usec): min=40879, max=42923, avg=41541.33, stdev=499.35 00:34:57.822 clat percentiles (usec): 00:34:57.822 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:57.822 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:34:57.822 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:57.822 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:34:57.822 | 99.99th=[42730] 
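The printf output above shows only the per-controller fragments that gen_nvmf_target_json accumulates; jq then folds them into a single bdev-subsystem JSON document, which fio reads from /dev/fd/62 via --spdk_json_conf. A sketch of the assembled document for this two-subsystem case, assuming the standard SPDK JSON-config wrapper (any extra bdev options entries the helper may emit are omitted here):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                          "adrfam": "ipv4", "trsvcid": "4420",
                          "subnqn": "nqn.2016-06.io.spdk:cnode0",
                          "hostnqn": "nqn.2016-06.io.spdk:host0",
                          "hdgst": false, "ddgst": false }
            },
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                          "adrfam": "ipv4", "trsvcid": "4420",
                          "subnqn": "nqn.2016-06.io.spdk:cnode1",
                          "hostnqn": "nqn.2016-06.io.spdk:host1",
                          "hdgst": false, "ddgst": false }
            }
          ]
        }
      ]
    }

Each attach_controller entry makes the fio bdev plugin connect to one NVMe/TCP subsystem and expose its namespace as a bdev (Nvme0n1, Nvme1n1) that the generated job file can reference by name.
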
00:34:57.822 bw ( KiB/s): min= 352, max= 416, per=33.72%, avg=384.00, stdev=10.38, samples=20 00:34:57.822 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:34:57.822 lat (msec) : 50=100.00% 00:34:57.822 cpu : usr=93.83%, sys=5.85%, ctx=13, majf=0, minf=202 00:34:57.822 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:57.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.822 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:57.822 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:57.822 filename1: (groupid=0, jobs=1): err= 0: pid=1273655: Sat Jul 13 15:46:28 2024 00:34:57.822 read: IOPS=188, BW=754KiB/s (772kB/s)(7552KiB/10017msec) 00:34:57.822 slat (nsec): min=7285, max=54745, avg=10051.16, stdev=4162.59 00:34:57.822 clat (usec): min=800, max=42157, avg=21192.25, stdev=20142.63 00:34:57.822 lat (usec): min=808, max=42169, avg=21202.30, stdev=20141.89 00:34:57.822 clat percentiles (usec): 00:34:57.822 | 1.00th=[ 824], 5.00th=[ 840], 10.00th=[ 857], 20.00th=[ 873], 00:34:57.822 | 30.00th=[ 889], 40.00th=[ 914], 50.00th=[41157], 60.00th=[41157], 00:34:57.822 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:34:57.822 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:57.822 | 99.99th=[42206] 00:34:57.822 bw ( KiB/s): min= 672, max= 768, per=66.13%, avg=753.60, stdev=28.39, samples=20 00:34:57.822 iops : min= 168, max= 192, avg=188.40, stdev= 7.10, samples=20 00:34:57.822 lat (usec) : 1000=49.26% 00:34:57.822 lat (msec) : 2=0.32%, 50=50.42% 00:34:57.822 cpu : usr=94.44%, sys=5.24%, ctx=20, majf=0, minf=49 00:34:57.822 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:57.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.822 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:57.822 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:57.822 00:34:57.822 Run status group 0 (all jobs): 00:34:57.822 READ: bw=1139KiB/s (1166kB/s), 385KiB/s-754KiB/s (394kB/s-772kB/s), io=11.1MiB (11.7MB), run=10017-10018msec 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 
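Teardown in destroy_subsystems mirrors the setup: each NVMe-oF subsystem is deleted first, then its backing null bdev. The equivalent direct calls, sketched with the same assumed rpc.py path as above (the command names and arguments are exactly those in the trace):

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_null_delete bdev_null0
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py bdev_null_delete bdev_null1
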
00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.822 00:34:57.822 real 0m11.393s 00:34:57.822 user 0m20.164s 00:34:57.822 sys 0m1.392s 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:57.822 15:46:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:57.822 ************************************ 00:34:57.822 END TEST fio_dif_1_multi_subsystems 00:34:57.822 ************************************ 00:34:57.822 15:46:28 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:34:57.822 15:46:28 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:57.822 15:46:28 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:57.822 15:46:28 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:57.822 15:46:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:57.822 ************************************ 00:34:57.822 START TEST fio_dif_rand_params 00:34:57.822 ************************************ 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 0 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:57.822 bdev_null0 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:57.822 [2024-07-13 15:46:28.580789] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:57.822 { 00:34:57.822 "params": { 00:34:57.822 "name": "Nvme$subsystem", 00:34:57.822 
"trtype": "$TEST_TRANSPORT", 00:34:57.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:57.822 "adrfam": "ipv4", 00:34:57.822 "trsvcid": "$NVMF_PORT", 00:34:57.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:57.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:57.822 "hdgst": ${hdgst:-false}, 00:34:57.822 "ddgst": ${ddgst:-false} 00:34:57.822 }, 00:34:57.822 "method": "bdev_nvme_attach_controller" 00:34:57.822 } 00:34:57.822 EOF 00:34:57.822 )") 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:57.822 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.080 15:46:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:58.080 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.080 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:58.080 15:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.080 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:58.080 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:58.080 15:46:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:34:58.080 15:46:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:58.080 15:46:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:58.080 "params": { 00:34:58.080 "name": "Nvme0", 00:34:58.080 "trtype": "tcp", 00:34:58.080 "traddr": "10.0.0.2", 00:34:58.080 "adrfam": "ipv4", 00:34:58.080 "trsvcid": "4420", 00:34:58.080 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:58.080 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:58.080 "hdgst": false, 00:34:58.080 "ddgst": false 00:34:58.080 }, 00:34:58.080 "method": "bdev_nvme_attach_controller" 00:34:58.080 }' 00:34:58.080 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:58.080 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:58.080 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.080 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.080 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:58.080 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:58.080 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:58.080 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:58.080 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:58.080 15:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.339 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:58.339 ... 
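The first fio_dif_rand_params pass, started just above, runs randread with 128 KiB blocks, iodepth 3, three jobs and a 5-second runtime against the DIF type 3 null bdev. The harness pipes both the JSON and the generated job file over anonymous descriptors; a standalone equivalent is sketched below. The job-file name, the time_based flag and the Nvme0n1 filename are assumptions (Nvme0n1 being the usual bdev name for namespace 1 of the attached Nvme0 controller); the ioengine, preload path and parameter values come from the trace:

    # randread-128k.fio (hypothetical job file mirroring the traced parameters)
    [global]
    ioengine=spdk_bdev
    thread=1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    runtime=5
    time_based=1

    [filename0]
    filename=Nvme0n1

    # Run with the SPDK fio bdev plugin preloaded, as the harness does:
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json randread-128k.fio

where bdev.json holds the single attach_controller document printed in the trace above.
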
00:34:58.339 fio-3.35 00:34:58.339 Starting 3 threads 00:34:58.339 EAL: No free 2048 kB hugepages reported on node 1 00:35:04.905 00:35:04.905 filename0: (groupid=0, jobs=1): err= 0: pid=1275048: Sat Jul 13 15:46:34 2024 00:35:04.905 read: IOPS=210, BW=26.4MiB/s (27.6MB/s)(132MiB/5004msec) 00:35:04.905 slat (nsec): min=5229, max=33110, avg=13519.71, stdev=1942.05 00:35:04.905 clat (usec): min=4829, max=92639, avg=14210.13, stdev=13265.56 00:35:04.905 lat (usec): min=4842, max=92653, avg=14223.65, stdev=13265.58 00:35:04.905 clat percentiles (usec): 00:35:04.905 | 1.00th=[ 5276], 5.00th=[ 5932], 10.00th=[ 6456], 20.00th=[ 8029], 00:35:04.905 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[11076], 00:35:04.905 | 70.00th=[12125], 80.00th=[12780], 90.00th=[47449], 95.00th=[50594], 00:35:04.905 | 99.00th=[54264], 99.50th=[55313], 99.90th=[91751], 99.95th=[92799], 00:35:04.905 | 99.99th=[92799] 00:35:04.905 bw ( KiB/s): min=18432, max=35328, per=33.95%, avg=26931.20, stdev=5360.13, samples=10 00:35:04.905 iops : min= 144, max= 276, avg=210.40, stdev=41.88, samples=10 00:35:04.905 lat (msec) : 10=53.36%, 20=35.92%, 50=3.70%, 100=7.01% 00:35:04.905 cpu : usr=91.98%, sys=7.20%, ctx=18, majf=0, minf=116 00:35:04.905 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:04.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.905 issued rwts: total=1055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.905 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:04.905 filename0: (groupid=0, jobs=1): err= 0: pid=1275049: Sat Jul 13 15:46:34 2024 00:35:04.905 read: IOPS=216, BW=27.0MiB/s (28.3MB/s)(136MiB/5046msec) 00:35:04.905 slat (nsec): min=4585, max=28712, avg=12880.29, stdev=2118.74 00:35:04.905 clat (usec): min=4682, max=92462, avg=13853.46, stdev=12974.83 00:35:04.905 lat (usec): min=4695, max=92476, avg=13866.35, stdev=12974.86 00:35:04.905 clat percentiles (usec): 00:35:04.905 | 1.00th=[ 5473], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 7308], 00:35:04.905 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9503], 60.00th=[10814], 00:35:04.905 | 70.00th=[11994], 80.00th=[12780], 90.00th=[46924], 95.00th=[51119], 00:35:04.905 | 99.00th=[53740], 99.50th=[55313], 99.90th=[55837], 99.95th=[92799], 00:35:04.905 | 99.99th=[92799] 00:35:04.905 bw ( KiB/s): min=19238, max=36864, per=35.12%, avg=27856.60, stdev=5630.44, samples=10 00:35:04.905 iops : min= 150, max= 288, avg=217.60, stdev=44.04, samples=10 00:35:04.905 lat (msec) : 10=53.80%, 20=35.84%, 50=4.03%, 100=6.32% 00:35:04.905 cpu : usr=93.62%, sys=5.93%, ctx=12, majf=0, minf=32 00:35:04.905 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:04.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.905 issued rwts: total=1091,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.905 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:04.905 filename0: (groupid=0, jobs=1): err= 0: pid=1275050: Sat Jul 13 15:46:34 2024 00:35:04.905 read: IOPS=195, BW=24.5MiB/s (25.6MB/s)(123MiB/5015msec) 00:35:04.905 slat (nsec): min=5000, max=34899, avg=14166.94, stdev=3951.61 00:35:04.905 clat (usec): min=4976, max=90091, avg=15315.96, stdev=14078.73 00:35:04.905 lat (usec): min=4989, max=90105, avg=15330.13, stdev=14079.10 00:35:04.905 clat percentiles (usec): 
00:35:04.905 | 1.00th=[ 5866], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 8029], 00:35:04.905 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[10290], 60.00th=[11600], 00:35:04.905 | 70.00th=[12649], 80.00th=[13566], 90.00th=[49021], 95.00th=[51643], 00:35:04.905 | 99.00th=[54789], 99.50th=[55313], 99.90th=[89654], 99.95th=[89654], 00:35:04.905 | 99.99th=[89654] 00:35:04.905 bw ( KiB/s): min=17664, max=34304, per=31.56%, avg=25036.80, stdev=5404.77, samples=10 00:35:04.905 iops : min= 138, max= 268, avg=195.60, stdev=42.22, samples=10 00:35:04.905 lat (msec) : 10=47.30%, 20=39.96%, 50=5.20%, 100=7.54% 00:35:04.905 cpu : usr=87.57%, sys=9.87%, ctx=367, majf=0, minf=132 00:35:04.905 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:04.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.906 issued rwts: total=981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.906 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:04.906 00:35:04.906 Run status group 0 (all jobs): 00:35:04.906 READ: bw=77.5MiB/s (81.2MB/s), 24.5MiB/s-27.0MiB/s (25.6MB/s-28.3MB/s), io=391MiB (410MB), run=5004-5046msec 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
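The pass being set up below repeats the exercise with DIF type 2 across three subsystems and a heavier workload (NULL_DIF=2, bs=4k, numjobs=8, iodepth=16, files=2 above, which yields the 24 fio threads seen later). The null bdevs keep the 64 MB / 512-byte-block / 16-byte-metadata geometry; only the DIF type changes. A sketch of the create call plus a quick way to confirm the reported block format (the bdev_get_bdevs check is an addition for illustration, not part of the traced script):

    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
    scripts/rpc.py bdev_get_bdevs -b bdev_null0   # inspect block_size, md_size and dif_type in the output
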
00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.906 bdev_null0 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.906 [2024-07-13 15:46:34.634368] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.906 bdev_null1 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.906 bdev_null2 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:35:04.906 { 00:35:04.906 "params": { 00:35:04.906 "name": "Nvme$subsystem", 00:35:04.906 "trtype": "$TEST_TRANSPORT", 00:35:04.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:04.906 "adrfam": "ipv4", 00:35:04.906 "trsvcid": "$NVMF_PORT", 00:35:04.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:04.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:04.906 "hdgst": ${hdgst:-false}, 00:35:04.906 "ddgst": ${ddgst:-false} 00:35:04.906 }, 00:35:04.906 "method": "bdev_nvme_attach_controller" 00:35:04.906 } 00:35:04.906 EOF 00:35:04.906 )") 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:04.906 15:46:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:04.906 { 00:35:04.906 "params": { 00:35:04.906 "name": "Nvme$subsystem", 00:35:04.906 "trtype": "$TEST_TRANSPORT", 00:35:04.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:04.906 "adrfam": "ipv4", 00:35:04.906 "trsvcid": "$NVMF_PORT", 00:35:04.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:04.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:04.907 "hdgst": ${hdgst:-false}, 00:35:04.907 "ddgst": ${ddgst:-false} 00:35:04.907 }, 00:35:04.907 "method": "bdev_nvme_attach_controller" 00:35:04.907 } 00:35:04.907 EOF 00:35:04.907 )") 00:35:04.907 15:46:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:04.907 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:35:04.907 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:04.907 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:04.907 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:04.907 15:46:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:04.907 15:46:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:04.907 15:46:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:04.907 { 00:35:04.907 "params": { 00:35:04.907 "name": "Nvme$subsystem", 00:35:04.907 "trtype": "$TEST_TRANSPORT", 00:35:04.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:04.907 "adrfam": "ipv4", 00:35:04.907 "trsvcid": "$NVMF_PORT", 00:35:04.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:04.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:04.907 "hdgst": ${hdgst:-false}, 00:35:04.907 "ddgst": ${ddgst:-false} 00:35:04.907 }, 00:35:04.907 "method": "bdev_nvme_attach_controller" 00:35:04.907 } 00:35:04.907 EOF 00:35:04.907 )") 00:35:04.907 15:46:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:04.907 15:46:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:04.907 15:46:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:04.907 15:46:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:04.907 "params": { 00:35:04.907 "name": "Nvme0", 00:35:04.907 "trtype": "tcp", 00:35:04.907 "traddr": "10.0.0.2", 00:35:04.907 "adrfam": "ipv4", 00:35:04.907 "trsvcid": "4420", 00:35:04.907 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:04.907 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:04.907 "hdgst": false, 00:35:04.907 "ddgst": false 00:35:04.907 }, 00:35:04.907 "method": "bdev_nvme_attach_controller" 00:35:04.907 },{ 00:35:04.907 "params": { 00:35:04.907 "name": "Nvme1", 00:35:04.907 "trtype": "tcp", 00:35:04.907 "traddr": "10.0.0.2", 00:35:04.907 "adrfam": "ipv4", 00:35:04.907 "trsvcid": "4420", 00:35:04.907 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:04.907 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:04.907 "hdgst": false, 00:35:04.907 "ddgst": false 00:35:04.907 }, 00:35:04.907 "method": "bdev_nvme_attach_controller" 00:35:04.907 },{ 00:35:04.907 "params": { 00:35:04.907 "name": "Nvme2", 00:35:04.907 "trtype": "tcp", 00:35:04.907 "traddr": "10.0.0.2", 00:35:04.907 "adrfam": "ipv4", 00:35:04.907 "trsvcid": "4420", 00:35:04.907 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:04.907 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:04.907 "hdgst": false, 00:35:04.907 "ddgst": false 00:35:04.907 }, 00:35:04.907 "method": "bdev_nvme_attach_controller" 00:35:04.907 }' 00:35:04.907 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:04.907 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:04.907 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:04.907 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:04.907 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:04.907 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:04.907 15:46:34 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:35:04.907 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:04.907 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:04.907 15:46:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:04.907 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:04.907 ... 00:35:04.907 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:04.907 ... 00:35:04.907 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:04.907 ... 00:35:04.907 fio-3.35 00:35:04.907 Starting 24 threads 00:35:04.907 EAL: No free 2048 kB hugepages reported on node 1 00:35:17.114 00:35:17.114 filename0: (groupid=0, jobs=1): err= 0: pid=1275910: Sat Jul 13 15:46:46 2024 00:35:17.114 read: IOPS=57, BW=228KiB/s (234kB/s)(2304KiB/10086msec) 00:35:17.114 slat (nsec): min=4120, max=94154, avg=50602.86, stdev=25861.57 00:35:17.114 clat (msec): min=201, max=439, avg=279.71, stdev=48.59 00:35:17.114 lat (msec): min=201, max=439, avg=279.76, stdev=48.59 00:35:17.114 clat percentiles (msec): 00:35:17.114 | 1.00th=[ 203], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 230], 00:35:17.114 | 30.00th=[ 251], 40.00th=[ 266], 50.00th=[ 279], 60.00th=[ 296], 00:35:17.114 | 70.00th=[ 309], 80.00th=[ 330], 90.00th=[ 334], 95.00th=[ 338], 00:35:17.114 | 99.00th=[ 435], 99.50th=[ 435], 99.90th=[ 439], 99.95th=[ 439], 00:35:17.114 | 99.99th=[ 439] 00:35:17.114 bw ( KiB/s): min= 128, max= 384, per=3.43%, avg=224.00, stdev=69.06, samples=20 00:35:17.114 iops : min= 32, max= 96, avg=56.00, stdev=17.27, samples=20 00:35:17.114 lat (msec) : 250=31.25%, 500=68.75% 00:35:17.114 cpu : usr=96.98%, sys=1.98%, ctx=41, majf=0, minf=9 00:35:17.114 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:35:17.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.114 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.114 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.114 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.114 filename0: (groupid=0, jobs=1): err= 0: pid=1275911: Sat Jul 13 15:46:46 2024 00:35:17.114 read: IOPS=57, BW=229KiB/s (234kB/s)(2304KiB/10083msec) 00:35:17.114 slat (usec): min=26, max=105, avg=69.30, stdev=12.92 00:35:17.114 clat (msec): min=118, max=442, avg=279.45, stdev=54.97 00:35:17.114 lat (msec): min=118, max=442, avg=279.52, stdev=54.97 00:35:17.114 clat percentiles (msec): 00:35:17.114 | 1.00th=[ 140], 5.00th=[ 203], 10.00th=[ 207], 20.00th=[ 218], 00:35:17.114 | 30.00th=[ 249], 40.00th=[ 266], 50.00th=[ 279], 60.00th=[ 296], 00:35:17.114 | 70.00th=[ 321], 80.00th=[ 330], 90.00th=[ 338], 95.00th=[ 347], 00:35:17.114 | 99.00th=[ 439], 99.50th=[ 439], 99.90th=[ 443], 99.95th=[ 443], 00:35:17.114 | 99.99th=[ 443] 00:35:17.114 bw ( KiB/s): min= 128, max= 368, per=3.43%, avg=224.00, stdev=67.68, samples=20 00:35:17.114 iops : min= 32, max= 92, avg=56.00, stdev=16.92, samples=20 00:35:17.114 lat (msec) : 250=33.33%, 500=66.67% 00:35:17.114 cpu : usr=97.20%, sys=1.84%, ctx=42, majf=0, minf=9 00:35:17.114 IO depths : 1=3.5%, 
2=9.7%, 4=25.0%, 8=52.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:35:17.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.114 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.114 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.114 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.114 filename0: (groupid=0, jobs=1): err= 0: pid=1275912: Sat Jul 13 15:46:46 2024 00:35:17.114 read: IOPS=63, BW=256KiB/s (262kB/s)(2584KiB/10099msec) 00:35:17.114 slat (nsec): min=8206, max=99908, avg=40452.41, stdev=27945.22 00:35:17.114 clat (msec): min=147, max=444, avg=249.07, stdev=47.64 00:35:17.114 lat (msec): min=147, max=444, avg=249.11, stdev=47.65 00:35:17.114 clat percentiles (msec): 00:35:17.114 | 1.00th=[ 155], 5.00th=[ 201], 10.00th=[ 205], 20.00th=[ 209], 00:35:17.114 | 30.00th=[ 218], 40.00th=[ 226], 50.00th=[ 236], 60.00th=[ 257], 00:35:17.114 | 70.00th=[ 271], 80.00th=[ 292], 90.00th=[ 309], 95.00th=[ 326], 00:35:17.114 | 99.00th=[ 380], 99.50th=[ 418], 99.90th=[ 443], 99.95th=[ 443], 00:35:17.114 | 99.99th=[ 443] 00:35:17.114 bw ( KiB/s): min= 128, max= 384, per=3.88%, avg=252.00, stdev=79.39, samples=20 00:35:17.114 iops : min= 32, max= 96, avg=63.00, stdev=19.85, samples=20 00:35:17.114 lat (msec) : 250=57.59%, 500=42.41% 00:35:17.114 cpu : usr=97.93%, sys=1.65%, ctx=19, majf=0, minf=9 00:35:17.114 IO depths : 1=2.5%, 2=7.6%, 4=21.5%, 8=58.4%, 16=10.1%, 32=0.0%, >=64=0.0% 00:35:17.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.114 complete : 0=0.0%, 4=93.1%, 8=1.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.114 issued rwts: total=646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.115 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.115 filename0: (groupid=0, jobs=1): err= 0: pid=1275913: Sat Jul 13 15:46:46 2024 00:35:17.115 read: IOPS=69, BW=278KiB/s (285kB/s)(2816KiB/10116msec) 00:35:17.115 slat (nsec): min=4047, max=90173, avg=36858.00, stdev=26915.45 00:35:17.115 clat (msec): min=143, max=372, avg=229.12, stdev=35.39 00:35:17.115 lat (msec): min=143, max=372, avg=229.16, stdev=35.40 00:35:17.115 clat percentiles (msec): 00:35:17.115 | 1.00th=[ 144], 5.00th=[ 176], 10.00th=[ 190], 20.00th=[ 207], 00:35:17.115 | 30.00th=[ 211], 40.00th=[ 215], 50.00th=[ 222], 60.00th=[ 230], 00:35:17.115 | 70.00th=[ 241], 80.00th=[ 264], 90.00th=[ 288], 95.00th=[ 292], 00:35:17.115 | 99.00th=[ 296], 99.50th=[ 359], 99.90th=[ 372], 99.95th=[ 372], 00:35:17.115 | 99.99th=[ 372] 00:35:17.115 bw ( KiB/s): min= 128, max= 384, per=4.24%, avg=275.20, stdev=65.37, samples=20 00:35:17.115 iops : min= 32, max= 96, avg=68.80, stdev=16.34, samples=20 00:35:17.115 lat (msec) : 250=73.58%, 500=26.42% 00:35:17.115 cpu : usr=98.13%, sys=1.47%, ctx=18, majf=0, minf=9 00:35:17.115 IO depths : 1=4.0%, 2=9.8%, 4=23.7%, 8=54.0%, 16=8.5%, 32=0.0%, >=64=0.0% 00:35:17.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.115 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.115 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.115 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.115 filename0: (groupid=0, jobs=1): err= 0: pid=1275914: Sat Jul 13 15:46:46 2024 00:35:17.115 read: IOPS=63, BW=256KiB/s (262kB/s)(2584KiB/10099msec) 00:35:17.115 slat (nsec): min=8219, max=97167, avg=40534.28, stdev=27270.64 00:35:17.115 clat (msec): min=147, max=440, avg=249.06, stdev=43.68 
00:35:17.115 lat (msec): min=147, max=440, avg=249.10, stdev=43.70 00:35:17.115 clat percentiles (msec): 00:35:17.115 | 1.00th=[ 153], 5.00th=[ 203], 10.00th=[ 205], 20.00th=[ 211], 00:35:17.115 | 30.00th=[ 224], 40.00th=[ 230], 50.00th=[ 236], 60.00th=[ 262], 00:35:17.115 | 70.00th=[ 275], 80.00th=[ 296], 90.00th=[ 300], 95.00th=[ 330], 00:35:17.115 | 99.00th=[ 334], 99.50th=[ 405], 99.90th=[ 439], 99.95th=[ 439], 00:35:17.115 | 99.99th=[ 439] 00:35:17.115 bw ( KiB/s): min= 128, max= 384, per=3.88%, avg=252.00, stdev=76.09, samples=20 00:35:17.115 iops : min= 32, max= 96, avg=63.00, stdev=19.02, samples=20 00:35:17.115 lat (msec) : 250=56.97%, 500=43.03% 00:35:17.115 cpu : usr=97.98%, sys=1.60%, ctx=17, majf=0, minf=9 00:35:17.115 IO depths : 1=3.9%, 2=9.8%, 4=23.8%, 8=53.9%, 16=8.7%, 32=0.0%, >=64=0.0% 00:35:17.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.115 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.115 issued rwts: total=646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.115 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.115 filename0: (groupid=0, jobs=1): err= 0: pid=1275915: Sat Jul 13 15:46:46 2024 00:35:17.115 read: IOPS=80, BW=323KiB/s (330kB/s)(3264KiB/10116msec) 00:35:17.115 slat (usec): min=8, max=323, avg=21.83, stdev=28.40 00:35:17.115 clat (msec): min=138, max=265, avg=197.44, stdev=24.84 00:35:17.115 lat (msec): min=138, max=265, avg=197.46, stdev=24.84 00:35:17.115 clat percentiles (msec): 00:35:17.115 | 1.00th=[ 140], 5.00th=[ 144], 10.00th=[ 163], 20.00th=[ 178], 00:35:17.115 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 205], 60.00th=[ 207], 00:35:17.115 | 70.00th=[ 213], 80.00th=[ 220], 90.00th=[ 226], 95.00th=[ 230], 00:35:17.115 | 99.00th=[ 234], 99.50th=[ 255], 99.90th=[ 266], 99.95th=[ 266], 00:35:17.115 | 99.99th=[ 266] 00:35:17.115 bw ( KiB/s): min= 256, max= 384, per=4.93%, avg=320.00, stdev=59.64, samples=20 00:35:17.115 iops : min= 64, max= 96, avg=80.00, stdev=14.91, samples=20 00:35:17.115 lat (msec) : 250=99.26%, 500=0.74% 00:35:17.115 cpu : usr=97.18%, sys=1.91%, ctx=94, majf=0, minf=9 00:35:17.115 IO depths : 1=0.7%, 2=7.0%, 4=25.0%, 8=55.5%, 16=11.8%, 32=0.0%, >=64=0.0% 00:35:17.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.115 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.115 issued rwts: total=816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.115 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.115 filename0: (groupid=0, jobs=1): err= 0: pid=1275916: Sat Jul 13 15:46:46 2024 00:35:17.115 read: IOPS=86, BW=346KiB/s (355kB/s)(3504KiB/10119msec) 00:35:17.115 slat (usec): min=4, max=100, avg=45.46, stdev=26.23 00:35:17.115 clat (msec): min=7, max=323, avg=183.75, stdev=49.40 00:35:17.115 lat (msec): min=7, max=323, avg=183.80, stdev=49.39 00:35:17.115 clat percentiles (msec): 00:35:17.115 | 1.00th=[ 8], 5.00th=[ 70], 10.00th=[ 126], 20.00th=[ 148], 00:35:17.115 | 30.00th=[ 171], 40.00th=[ 180], 50.00th=[ 190], 60.00th=[ 205], 00:35:17.115 | 70.00th=[ 215], 80.00th=[ 222], 90.00th=[ 226], 95.00th=[ 234], 00:35:17.115 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 326], 99.95th=[ 326], 00:35:17.115 | 99.99th=[ 326] 00:35:17.115 bw ( KiB/s): min= 224, max= 512, per=5.28%, avg=344.00, stdev=78.81, samples=20 00:35:17.115 iops : min= 56, max= 128, avg=86.00, stdev=19.70, samples=20 00:35:17.115 lat (msec) : 10=1.83%, 100=4.34%, 250=90.41%, 500=3.42% 00:35:17.115 cpu : 
usr=97.53%, sys=1.70%, ctx=46, majf=0, minf=9 00:35:17.115 IO depths : 1=0.5%, 2=1.4%, 4=8.6%, 8=77.3%, 16=12.3%, 32=0.0%, >=64=0.0% 00:35:17.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.115 complete : 0=0.0%, 4=89.3%, 8=5.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.115 issued rwts: total=876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.115 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.115 filename0: (groupid=0, jobs=1): err= 0: pid=1275917: Sat Jul 13 15:46:46 2024 00:35:17.115 read: IOPS=62, BW=250KiB/s (256kB/s)(2520KiB/10090msec) 00:35:17.115 slat (nsec): min=5878, max=60001, avg=15292.57, stdev=9206.18 00:35:17.115 clat (msec): min=129, max=442, avg=256.09, stdev=51.79 00:35:17.115 lat (msec): min=129, max=442, avg=256.11, stdev=51.79 00:35:17.115 clat percentiles (msec): 00:35:17.115 | 1.00th=[ 130], 5.00th=[ 165], 10.00th=[ 207], 20.00th=[ 213], 00:35:17.115 | 30.00th=[ 220], 40.00th=[ 236], 50.00th=[ 255], 60.00th=[ 271], 00:35:17.115 | 70.00th=[ 288], 80.00th=[ 309], 90.00th=[ 326], 95.00th=[ 342], 00:35:17.115 | 99.00th=[ 351], 99.50th=[ 380], 99.90th=[ 443], 99.95th=[ 443], 00:35:17.115 | 99.99th=[ 443] 00:35:17.115 bw ( KiB/s): min= 128, max= 384, per=3.77%, avg=245.60, stdev=71.98, samples=20 00:35:17.115 iops : min= 32, max= 96, avg=61.40, stdev=18.00, samples=20 00:35:17.115 lat (msec) : 250=49.21%, 500=50.79% 00:35:17.115 cpu : usr=97.84%, sys=1.80%, ctx=16, majf=0, minf=9 00:35:17.115 IO depths : 1=4.9%, 2=10.5%, 4=22.9%, 8=54.1%, 16=7.6%, 32=0.0%, >=64=0.0% 00:35:17.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.115 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.115 issued rwts: total=630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.115 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.115 filename1: (groupid=0, jobs=1): err= 0: pid=1275918: Sat Jul 13 15:46:46 2024 00:35:17.115 read: IOPS=57, BW=229KiB/s (235kB/s)(2304KiB/10057msec) 00:35:17.115 slat (nsec): min=8208, max=96174, avg=43514.40, stdev=27159.91 00:35:17.115 clat (msec): min=193, max=445, avg=278.95, stdev=51.37 00:35:17.115 lat (msec): min=193, max=445, avg=279.00, stdev=51.35 00:35:17.115 clat percentiles (msec): 00:35:17.115 | 1.00th=[ 194], 5.00th=[ 203], 10.00th=[ 207], 20.00th=[ 218], 00:35:17.115 | 30.00th=[ 249], 40.00th=[ 266], 50.00th=[ 279], 60.00th=[ 305], 00:35:17.115 | 70.00th=[ 313], 80.00th=[ 326], 90.00th=[ 338], 95.00th=[ 338], 00:35:17.115 | 99.00th=[ 439], 99.50th=[ 439], 99.90th=[ 447], 99.95th=[ 447], 00:35:17.115 | 99.99th=[ 447] 00:35:17.115 bw ( KiB/s): min= 128, max= 384, per=3.43%, avg=224.00, stdev=70.42, samples=20 00:35:17.115 iops : min= 32, max= 96, avg=56.00, stdev=17.60, samples=20 00:35:17.115 lat (msec) : 250=31.94%, 500=68.06% 00:35:17.115 cpu : usr=98.27%, sys=1.33%, ctx=29, majf=0, minf=9 00:35:17.115 IO depths : 1=4.7%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:35:17.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.115 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.115 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.115 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.115 filename1: (groupid=0, jobs=1): err= 0: pid=1275919: Sat Jul 13 15:46:46 2024 00:35:17.115 read: IOPS=56, BW=228KiB/s (233kB/s)(2296KiB/10083msec) 00:35:17.115 slat (nsec): min=8466, max=84208, avg=26876.60, stdev=18003.06 
00:35:17.115 clat (msec): min=131, max=453, avg=280.72, stdev=57.97 00:35:17.115 lat (msec): min=131, max=453, avg=280.75, stdev=57.96 00:35:17.115 clat percentiles (msec): 00:35:17.115 | 1.00th=[ 132], 5.00th=[ 203], 10.00th=[ 207], 20.00th=[ 222], 00:35:17.115 | 30.00th=[ 262], 40.00th=[ 266], 50.00th=[ 275], 60.00th=[ 296], 00:35:17.115 | 70.00th=[ 321], 80.00th=[ 334], 90.00th=[ 342], 95.00th=[ 342], 00:35:17.115 | 99.00th=[ 443], 99.50th=[ 447], 99.90th=[ 456], 99.95th=[ 456], 00:35:17.115 | 99.99th=[ 456] 00:35:17.115 bw ( KiB/s): min= 127, max= 384, per=3.43%, avg=223.15, stdev=68.84, samples=20 00:35:17.115 iops : min= 31, max= 96, avg=55.75, stdev=17.27, samples=20 00:35:17.115 lat (msec) : 250=29.62%, 500=70.38% 00:35:17.115 cpu : usr=97.68%, sys=1.85%, ctx=23, majf=0, minf=9 00:35:17.115 IO depths : 1=4.4%, 2=10.6%, 4=25.1%, 8=51.9%, 16=8.0%, 32=0.0%, >=64=0.0% 00:35:17.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.115 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.115 issued rwts: total=574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.115 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.115 filename1: (groupid=0, jobs=1): err= 0: pid=1275920: Sat Jul 13 15:46:46 2024 00:35:17.115 read: IOPS=63, BW=254KiB/s (260kB/s)(2560KiB/10090msec) 00:35:17.115 slat (nsec): min=5730, max=57247, avg=17481.81, stdev=9990.95 00:35:17.115 clat (msec): min=141, max=433, avg=251.56, stdev=50.67 00:35:17.116 lat (msec): min=141, max=433, avg=251.58, stdev=50.67 00:35:17.116 clat percentiles (msec): 00:35:17.116 | 1.00th=[ 148], 5.00th=[ 180], 10.00th=[ 205], 20.00th=[ 211], 00:35:17.116 | 30.00th=[ 222], 40.00th=[ 230], 50.00th=[ 236], 60.00th=[ 259], 00:35:17.116 | 70.00th=[ 271], 80.00th=[ 296], 90.00th=[ 326], 95.00th=[ 334], 00:35:17.116 | 99.00th=[ 380], 99.50th=[ 422], 99.90th=[ 435], 99.95th=[ 435], 00:35:17.116 | 99.99th=[ 435] 00:35:17.116 bw ( KiB/s): min= 128, max= 384, per=3.83%, avg=249.60, stdev=77.24, samples=20 00:35:17.116 iops : min= 32, max= 96, avg=62.40, stdev=19.31, samples=20 00:35:17.116 lat (msec) : 250=55.94%, 500=44.06% 00:35:17.116 cpu : usr=98.00%, sys=1.66%, ctx=16, majf=0, minf=9 00:35:17.116 IO depths : 1=2.7%, 2=8.8%, 4=24.5%, 8=54.2%, 16=9.8%, 32=0.0%, >=64=0.0% 00:35:17.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.116 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.116 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.116 filename1: (groupid=0, jobs=1): err= 0: pid=1275921: Sat Jul 13 15:46:46 2024 00:35:17.116 read: IOPS=74, BW=298KiB/s (305kB/s)(3008KiB/10099msec) 00:35:17.116 slat (usec): min=12, max=272, avg=56.45, stdev=24.58 00:35:17.116 clat (msec): min=125, max=305, avg=214.01, stdev=34.52 00:35:17.116 lat (msec): min=125, max=305, avg=214.07, stdev=34.53 00:35:17.116 clat percentiles (msec): 00:35:17.116 | 1.00th=[ 126], 5.00th=[ 161], 10.00th=[ 169], 20.00th=[ 194], 00:35:17.116 | 30.00th=[ 203], 40.00th=[ 211], 50.00th=[ 213], 60.00th=[ 220], 00:35:17.116 | 70.00th=[ 226], 80.00th=[ 232], 90.00th=[ 268], 95.00th=[ 271], 00:35:17.116 | 99.00th=[ 305], 99.50th=[ 305], 99.90th=[ 305], 99.95th=[ 305], 00:35:17.116 | 99.99th=[ 305] 00:35:17.116 bw ( KiB/s): min= 256, max= 384, per=4.53%, avg=294.40, stdev=53.29, samples=20 00:35:17.116 iops : min= 64, max= 96, avg=73.60, stdev=13.32, samples=20 
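A quick way to sanity-check the per-job numbers in these blocks: with the 4 KiB block size shown in the job banner, IOPS should equal bandwidth in KiB/s divided by 4. Taking the pid=1275921 block just above (avg bw 294.40 KiB/s), a one-liner reproduces the reported 73.60 IOPS; the variable names are purely illustrative.

bw_kib_s=294.40   # avg bandwidth reported for pid=1275921 above
bs_kib=4          # 4096B block size from the job banner
# expect ~73.60, matching the reported avg IOPS
awk -v bw="$bw_kib_s" -v bs="$bs_kib" 'BEGIN { printf "%.2f IOPS\n", bw / bs }'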
00:35:17.116 lat (msec) : 250=85.37%, 500=14.63% 00:35:17.116 cpu : usr=96.54%, sys=2.27%, ctx=38, majf=0, minf=9 00:35:17.116 IO depths : 1=3.2%, 2=8.2%, 4=21.4%, 8=57.8%, 16=9.3%, 32=0.0%, >=64=0.0% 00:35:17.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.116 complete : 0=0.0%, 4=93.1%, 8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.116 issued rwts: total=752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.116 filename1: (groupid=0, jobs=1): err= 0: pid=1275922: Sat Jul 13 15:46:46 2024 00:35:17.116 read: IOPS=56, BW=228KiB/s (233kB/s)(2296KiB/10084msec) 00:35:17.116 slat (nsec): min=24976, max=97661, avg=64404.98, stdev=10558.74 00:35:17.116 clat (msec): min=125, max=489, avg=280.46, stdev=63.75 00:35:17.116 lat (msec): min=126, max=489, avg=280.53, stdev=63.75 00:35:17.116 clat percentiles (msec): 00:35:17.116 | 1.00th=[ 130], 5.00th=[ 190], 10.00th=[ 207], 20.00th=[ 220], 00:35:17.116 | 30.00th=[ 243], 40.00th=[ 266], 50.00th=[ 275], 60.00th=[ 296], 00:35:17.116 | 70.00th=[ 321], 80.00th=[ 334], 90.00th=[ 342], 95.00th=[ 405], 00:35:17.116 | 99.00th=[ 451], 99.50th=[ 460], 99.90th=[ 489], 99.95th=[ 489], 00:35:17.116 | 99.99th=[ 489] 00:35:17.116 bw ( KiB/s): min= 128, max= 384, per=3.43%, avg=223.20, stdev=68.76, samples=20 00:35:17.116 iops : min= 32, max= 96, avg=55.80, stdev=17.19, samples=20 00:35:17.116 lat (msec) : 250=30.31%, 500=69.69% 00:35:17.116 cpu : usr=97.19%, sys=1.83%, ctx=132, majf=0, minf=9 00:35:17.116 IO depths : 1=3.1%, 2=9.4%, 4=25.1%, 8=53.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:35:17.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.116 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.116 issued rwts: total=574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.116 filename1: (groupid=0, jobs=1): err= 0: pid=1275923: Sat Jul 13 15:46:46 2024 00:35:17.116 read: IOPS=57, BW=229KiB/s (234kB/s)(2304KiB/10075msec) 00:35:17.116 slat (usec): min=12, max=185, avg=68.66, stdev=20.66 00:35:17.116 clat (msec): min=201, max=349, avg=279.22, stdev=46.08 00:35:17.116 lat (msec): min=201, max=349, avg=279.29, stdev=46.08 00:35:17.116 clat percentiles (msec): 00:35:17.116 | 1.00th=[ 203], 5.00th=[ 205], 10.00th=[ 207], 20.00th=[ 236], 00:35:17.116 | 30.00th=[ 257], 40.00th=[ 268], 50.00th=[ 275], 60.00th=[ 296], 00:35:17.116 | 70.00th=[ 321], 80.00th=[ 326], 90.00th=[ 338], 95.00th=[ 338], 00:35:17.116 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 351], 99.95th=[ 351], 00:35:17.116 | 99.99th=[ 351] 00:35:17.116 bw ( KiB/s): min= 128, max= 384, per=3.45%, avg=224.00, stdev=81.75, samples=20 00:35:17.116 iops : min= 32, max= 96, avg=56.00, stdev=20.44, samples=20 00:35:17.116 lat (msec) : 250=27.78%, 500=72.22% 00:35:17.116 cpu : usr=94.60%, sys=2.88%, ctx=142, majf=0, minf=9 00:35:17.116 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:17.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.116 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.116 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.116 filename1: (groupid=0, jobs=1): err= 0: pid=1275924: Sat Jul 13 15:46:46 2024 00:35:17.116 read: IOPS=82, BW=331KiB/s (338kB/s)(3344KiB/10118msec) 00:35:17.116 
slat (usec): min=4, max=138, avg=60.46, stdev=20.24 00:35:17.116 clat (msec): min=7, max=311, avg=193.12, stdev=47.42 00:35:17.116 lat (msec): min=7, max=312, avg=193.18, stdev=47.43 00:35:17.116 clat percentiles (msec): 00:35:17.116 | 1.00th=[ 8], 5.00th=[ 69], 10.00th=[ 136], 20.00th=[ 178], 00:35:17.116 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 207], 60.00th=[ 213], 00:35:17.116 | 70.00th=[ 218], 80.00th=[ 222], 90.00th=[ 228], 95.00th=[ 239], 00:35:17.116 | 99.00th=[ 288], 99.50th=[ 300], 99.90th=[ 313], 99.95th=[ 313], 00:35:17.116 | 99.99th=[ 313] 00:35:17.116 bw ( KiB/s): min= 224, max= 625, per=5.05%, avg=328.05, stdev=85.40, samples=20 00:35:17.116 iops : min= 56, max= 156, avg=82.00, stdev=21.30, samples=20 00:35:17.116 lat (msec) : 10=1.91%, 100=3.83%, 250=89.71%, 500=4.55% 00:35:17.116 cpu : usr=96.16%, sys=2.32%, ctx=49, majf=0, minf=9 00:35:17.116 IO depths : 1=0.8%, 2=2.8%, 4=11.7%, 8=72.8%, 16=11.8%, 32=0.0%, >=64=0.0% 00:35:17.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.116 complete : 0=0.0%, 4=90.3%, 8=4.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.116 issued rwts: total=836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.116 filename1: (groupid=0, jobs=1): err= 0: pid=1275925: Sat Jul 13 15:46:46 2024 00:35:17.116 read: IOPS=64, BW=260KiB/s (266kB/s)(2624KiB/10099msec) 00:35:17.116 slat (usec): min=8, max=113, avg=43.06, stdev=28.76 00:35:17.116 clat (msec): min=125, max=421, avg=245.45, stdev=43.11 00:35:17.116 lat (msec): min=125, max=421, avg=245.49, stdev=43.12 00:35:17.116 clat percentiles (msec): 00:35:17.116 | 1.00th=[ 159], 5.00th=[ 203], 10.00th=[ 207], 20.00th=[ 213], 00:35:17.116 | 30.00th=[ 220], 40.00th=[ 222], 50.00th=[ 230], 60.00th=[ 241], 00:35:17.116 | 70.00th=[ 266], 80.00th=[ 288], 90.00th=[ 300], 95.00th=[ 321], 00:35:17.116 | 99.00th=[ 363], 99.50th=[ 405], 99.90th=[ 422], 99.95th=[ 422], 00:35:17.116 | 99.99th=[ 422] 00:35:17.116 bw ( KiB/s): min= 128, max= 384, per=3.94%, avg=256.00, stdev=57.57, samples=20 00:35:17.116 iops : min= 32, max= 96, avg=64.00, stdev=14.39, samples=20 00:35:17.116 lat (msec) : 250=61.59%, 500=38.41% 00:35:17.116 cpu : usr=97.28%, sys=1.81%, ctx=41, majf=0, minf=9 00:35:17.116 IO depths : 1=2.3%, 2=8.5%, 4=25.0%, 8=54.0%, 16=10.2%, 32=0.0%, >=64=0.0% 00:35:17.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.116 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.116 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.116 filename2: (groupid=0, jobs=1): err= 0: pid=1275926: Sat Jul 13 15:46:46 2024 00:35:17.116 read: IOPS=68, BW=273KiB/s (280kB/s)(2752KiB/10074msec) 00:35:17.116 slat (nsec): min=8128, max=82475, avg=20779.24, stdev=13694.45 00:35:17.116 clat (msec): min=147, max=358, avg=234.11, stdev=37.25 00:35:17.116 lat (msec): min=147, max=358, avg=234.13, stdev=37.25 00:35:17.116 clat percentiles (msec): 00:35:17.116 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 197], 20.00th=[ 205], 00:35:17.116 | 30.00th=[ 211], 40.00th=[ 215], 50.00th=[ 226], 60.00th=[ 230], 00:35:17.116 | 70.00th=[ 249], 80.00th=[ 266], 90.00th=[ 288], 95.00th=[ 300], 00:35:17.116 | 99.00th=[ 334], 99.50th=[ 359], 99.90th=[ 359], 99.95th=[ 359], 00:35:17.116 | 99.99th=[ 359] 00:35:17.116 bw ( KiB/s): min= 128, max= 384, per=4.13%, avg=268.80, stdev=54.10, samples=20 00:35:17.116 iops 
: min= 32, max= 96, avg=67.20, stdev=13.52, samples=20 00:35:17.116 lat (msec) : 250=71.80%, 500=28.20% 00:35:17.116 cpu : usr=97.59%, sys=1.75%, ctx=53, majf=0, minf=9 00:35:17.116 IO depths : 1=2.2%, 2=8.3%, 4=24.6%, 8=54.7%, 16=10.3%, 32=0.0%, >=64=0.0% 00:35:17.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.116 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.116 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.116 filename2: (groupid=0, jobs=1): err= 0: pid=1275927: Sat Jul 13 15:46:46 2024 00:35:17.116 read: IOPS=77, BW=311KiB/s (318kB/s)(3136KiB/10099msec) 00:35:17.116 slat (usec): min=11, max=106, avg=39.43, stdev=24.02 00:35:17.116 clat (msec): min=125, max=336, avg=205.34, stdev=44.84 00:35:17.116 lat (msec): min=125, max=336, avg=205.38, stdev=44.85 00:35:17.116 clat percentiles (msec): 00:35:17.116 | 1.00th=[ 127], 5.00th=[ 142], 10.00th=[ 155], 20.00th=[ 171], 00:35:17.117 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 205], 60.00th=[ 211], 00:35:17.117 | 70.00th=[ 215], 80.00th=[ 226], 90.00th=[ 239], 95.00th=[ 326], 00:35:17.117 | 99.00th=[ 338], 99.50th=[ 338], 99.90th=[ 338], 99.95th=[ 338], 00:35:17.117 | 99.99th=[ 338] 00:35:17.117 bw ( KiB/s): min= 128, max= 496, per=4.73%, avg=307.20, stdev=82.50, samples=20 00:35:17.117 iops : min= 32, max= 124, avg=76.80, stdev=20.63, samples=20 00:35:17.117 lat (msec) : 250=91.58%, 500=8.42% 00:35:17.117 cpu : usr=98.00%, sys=1.53%, ctx=47, majf=0, minf=9 00:35:17.117 IO depths : 1=3.3%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:35:17.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.117 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.117 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.117 filename2: (groupid=0, jobs=1): err= 0: pid=1275928: Sat Jul 13 15:46:46 2024 00:35:17.117 read: IOPS=66, BW=266KiB/s (273kB/s)(2688KiB/10091msec) 00:35:17.117 slat (usec): min=8, max=355, avg=27.77, stdev=28.86 00:35:17.117 clat (msec): min=162, max=368, avg=239.56, stdev=41.11 00:35:17.117 lat (msec): min=162, max=368, avg=239.59, stdev=41.11 00:35:17.117 clat percentiles (msec): 00:35:17.117 | 1.00th=[ 163], 5.00th=[ 190], 10.00th=[ 199], 20.00th=[ 209], 00:35:17.117 | 30.00th=[ 213], 40.00th=[ 218], 50.00th=[ 230], 60.00th=[ 236], 00:35:17.117 | 70.00th=[ 262], 80.00th=[ 275], 90.00th=[ 296], 95.00th=[ 321], 00:35:17.117 | 99.00th=[ 342], 99.50th=[ 368], 99.90th=[ 368], 99.95th=[ 368], 00:35:17.117 | 99.99th=[ 368] 00:35:17.117 bw ( KiB/s): min= 128, max= 384, per=4.04%, avg=262.40, stdev=77.76, samples=20 00:35:17.117 iops : min= 32, max= 96, avg=65.60, stdev=19.44, samples=20 00:35:17.117 lat (msec) : 250=67.86%, 500=32.14% 00:35:17.117 cpu : usr=95.95%, sys=2.50%, ctx=111, majf=0, minf=9 00:35:17.117 IO depths : 1=4.5%, 2=10.3%, 4=23.5%, 8=53.6%, 16=8.2%, 32=0.0%, >=64=0.0% 00:35:17.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.117 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.117 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.117 filename2: (groupid=0, jobs=1): err= 0: pid=1275929: Sat Jul 13 15:46:46 2024 00:35:17.117 read: IOPS=85, BW=344KiB/s 
(352kB/s)(3480KiB/10119msec) 00:35:17.117 slat (usec): min=5, max=169, avg=57.55, stdev=21.50 00:35:17.117 clat (msec): min=6, max=337, avg=185.30, stdev=50.91 00:35:17.117 lat (msec): min=6, max=337, avg=185.35, stdev=50.92 00:35:17.117 clat percentiles (msec): 00:35:17.117 | 1.00th=[ 7], 5.00th=[ 70], 10.00th=[ 123], 20.00th=[ 148], 00:35:17.117 | 30.00th=[ 169], 40.00th=[ 180], 50.00th=[ 188], 60.00th=[ 205], 00:35:17.117 | 70.00th=[ 215], 80.00th=[ 222], 90.00th=[ 230], 95.00th=[ 266], 00:35:17.117 | 99.00th=[ 309], 99.50th=[ 330], 99.90th=[ 338], 99.95th=[ 338], 00:35:17.117 | 99.99th=[ 338] 00:35:17.117 bw ( KiB/s): min= 256, max= 641, per=5.25%, avg=341.65, stdev=87.70, samples=20 00:35:17.117 iops : min= 64, max= 160, avg=85.40, stdev=21.88, samples=20 00:35:17.117 lat (msec) : 10=1.61%, 50=0.23%, 100=3.68%, 250=89.43%, 500=5.06% 00:35:17.117 cpu : usr=96.43%, sys=2.16%, ctx=120, majf=0, minf=9 00:35:17.117 IO depths : 1=0.5%, 2=1.8%, 4=9.9%, 8=75.4%, 16=12.4%, 32=0.0%, >=64=0.0% 00:35:17.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.117 complete : 0=0.0%, 4=89.8%, 8=5.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.117 issued rwts: total=870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.117 filename2: (groupid=0, jobs=1): err= 0: pid=1275930: Sat Jul 13 15:46:46 2024 00:35:17.117 read: IOPS=57, BW=228KiB/s (234kB/s)(2304KiB/10089msec) 00:35:17.117 slat (nsec): min=4165, max=51807, avg=22221.67, stdev=5434.81 00:35:17.117 clat (msec): min=118, max=443, avg=280.05, stdev=55.28 00:35:17.117 lat (msec): min=118, max=443, avg=280.07, stdev=55.28 00:35:17.117 clat percentiles (msec): 00:35:17.117 | 1.00th=[ 140], 5.00th=[ 203], 10.00th=[ 207], 20.00th=[ 218], 00:35:17.117 | 30.00th=[ 251], 40.00th=[ 266], 50.00th=[ 284], 60.00th=[ 296], 00:35:17.117 | 70.00th=[ 321], 80.00th=[ 330], 90.00th=[ 338], 95.00th=[ 351], 00:35:17.117 | 99.00th=[ 439], 99.50th=[ 439], 99.90th=[ 443], 99.95th=[ 443], 00:35:17.117 | 99.99th=[ 443] 00:35:17.117 bw ( KiB/s): min= 128, max= 368, per=3.43%, avg=224.00, stdev=67.68, samples=20 00:35:17.117 iops : min= 32, max= 92, avg=56.00, stdev=16.92, samples=20 00:35:17.117 lat (msec) : 250=29.86%, 500=70.14% 00:35:17.117 cpu : usr=97.57%, sys=1.78%, ctx=11, majf=0, minf=9 00:35:17.117 IO depths : 1=3.5%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:35:17.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.117 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.117 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.117 filename2: (groupid=0, jobs=1): err= 0: pid=1275931: Sat Jul 13 15:46:46 2024 00:35:17.117 read: IOPS=80, BW=322KiB/s (330kB/s)(3256KiB/10099msec) 00:35:17.117 slat (nsec): min=8136, max=39413, avg=11316.86, stdev=4464.99 00:35:17.117 clat (msec): min=120, max=303, avg=197.60, stdev=32.39 00:35:17.117 lat (msec): min=120, max=303, avg=197.61, stdev=32.39 00:35:17.117 clat percentiles (msec): 00:35:17.117 | 1.00th=[ 122], 5.00th=[ 133], 10.00th=[ 146], 20.00th=[ 180], 00:35:17.117 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 207], 60.00th=[ 213], 00:35:17.117 | 70.00th=[ 218], 80.00th=[ 222], 90.00th=[ 226], 95.00th=[ 230], 00:35:17.117 | 99.00th=[ 284], 99.50th=[ 300], 99.90th=[ 305], 99.95th=[ 305], 00:35:17.117 | 99.99th=[ 305] 00:35:17.117 bw ( KiB/s): min= 256, max= 496, 
per=4.91%, avg=319.20, stdev=61.09, samples=20 00:35:17.117 iops : min= 64, max= 124, avg=79.80, stdev=15.27, samples=20 00:35:17.117 lat (msec) : 250=96.56%, 500=3.44% 00:35:17.117 cpu : usr=97.84%, sys=1.70%, ctx=15, majf=0, minf=9 00:35:17.117 IO depths : 1=0.5%, 2=2.2%, 4=11.2%, 8=74.0%, 16=12.2%, 32=0.0%, >=64=0.0% 00:35:17.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.117 complete : 0=0.0%, 4=90.2%, 8=4.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.117 issued rwts: total=814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.117 filename2: (groupid=0, jobs=1): err= 0: pid=1275932: Sat Jul 13 15:46:46 2024 00:35:17.117 read: IOPS=57, BW=229KiB/s (234kB/s)(2304KiB/10081msec) 00:35:17.117 slat (usec): min=21, max=164, avg=69.61, stdev=14.04 00:35:17.117 clat (msec): min=201, max=348, avg=279.41, stdev=45.51 00:35:17.117 lat (msec): min=201, max=348, avg=279.48, stdev=45.52 00:35:17.117 clat percentiles (msec): 00:35:17.117 | 1.00th=[ 203], 5.00th=[ 207], 10.00th=[ 209], 20.00th=[ 236], 00:35:17.117 | 30.00th=[ 249], 40.00th=[ 266], 50.00th=[ 284], 60.00th=[ 296], 00:35:17.117 | 70.00th=[ 321], 80.00th=[ 330], 90.00th=[ 334], 95.00th=[ 338], 00:35:17.117 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 351], 99.95th=[ 351], 00:35:17.117 | 99.99th=[ 351] 00:35:17.117 bw ( KiB/s): min= 128, max= 384, per=3.43%, avg=224.00, stdev=70.42, samples=20 00:35:17.117 iops : min= 32, max= 96, avg=56.00, stdev=17.60, samples=20 00:35:17.117 lat (msec) : 250=30.56%, 500=69.44% 00:35:17.117 cpu : usr=95.41%, sys=2.70%, ctx=100, majf=0, minf=9 00:35:17.117 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:17.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.117 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.117 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.117 filename2: (groupid=0, jobs=1): err= 0: pid=1275933: Sat Jul 13 15:46:46 2024 00:35:17.117 read: IOPS=78, BW=313KiB/s (321kB/s)(3168KiB/10118msec) 00:35:17.117 slat (usec): min=4, max=346, avg=66.19, stdev=22.89 00:35:17.117 clat (msec): min=66, max=347, avg=203.53, stdev=39.24 00:35:17.117 lat (msec): min=66, max=347, avg=203.60, stdev=39.24 00:35:17.117 clat percentiles (msec): 00:35:17.117 | 1.00th=[ 68], 5.00th=[ 165], 10.00th=[ 176], 20.00th=[ 180], 00:35:17.117 | 30.00th=[ 188], 40.00th=[ 199], 50.00th=[ 207], 60.00th=[ 211], 00:35:17.117 | 70.00th=[ 220], 80.00th=[ 224], 90.00th=[ 243], 95.00th=[ 255], 00:35:17.117 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 347], 99.95th=[ 347], 00:35:17.117 | 99.99th=[ 347] 00:35:17.117 bw ( KiB/s): min= 256, max= 384, per=4.77%, avg=310.40, stdev=49.90, samples=20 00:35:17.117 iops : min= 64, max= 96, avg=77.60, stdev=12.47, samples=20 00:35:17.117 lat (msec) : 100=4.04%, 250=90.91%, 500=5.05% 00:35:17.117 cpu : usr=97.06%, sys=1.85%, ctx=65, majf=0, minf=9 00:35:17.117 IO depths : 1=1.5%, 2=3.7%, 4=12.5%, 8=71.2%, 16=11.1%, 32=0.0%, >=64=0.0% 00:35:17.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.117 complete : 0=0.0%, 4=90.5%, 8=4.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:17.117 issued rwts: total=792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:17.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:17.117 00:35:17.117 Run status group 0 (all jobs): 
00:35:17.117 READ: bw=6493KiB/s (6649kB/s), 228KiB/s-346KiB/s (233kB/s-355kB/s), io=64.2MiB (67.3MB), run=10057-10119msec 00:35:17.117 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:17.117 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:17.117 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:17.117 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:17.117 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:17.117 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:17.117 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.117 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:17.117 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.117 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:17.117 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.117 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:17.117 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.117 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:17.117 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:17.117 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:17.117 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:17.118 bdev_null0 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:17.118 [2024-07-13 15:46:46.408716] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 
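For anyone replaying this by hand, each create_subsystem call traced here amounts to four RPCs against the running target. A minimal sketch using the same arguments that appear in this log (rpc_cmd is the autotest wrapper around scripts/rpc.py; the client path below is an assumption):

rpc=./scripts/rpc.py   # assumed location of the SPDK RPC client
# 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
# expose it over NVMe/TCP on 10.0.0.2:4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The destroy_subsystems path seen earlier simply reverses this with nvmf_delete_subsystem and bdev_null_delete.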
00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:17.118 bdev_null1 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:17.118 { 00:35:17.118 "params": { 00:35:17.118 "name": "Nvme$subsystem", 00:35:17.118 "trtype": "$TEST_TRANSPORT", 00:35:17.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:17.118 "adrfam": "ipv4", 00:35:17.118 "trsvcid": "$NVMF_PORT", 00:35:17.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:17.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:17.118 "hdgst": ${hdgst:-false}, 00:35:17.118 "ddgst": ${ddgst:-false} 00:35:17.118 }, 00:35:17.118 "method": "bdev_nvme_attach_controller" 00:35:17.118 } 00:35:17.118 EOF 00:35:17.118 )") 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
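The job file itself is produced by gen_fio_conf and never echoed into the log; given the parameters set above (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1) and the banner printed further down, a hand-written equivalent would look roughly like this (a sketch only; the Nvme0n1/Nvme1n1 bdev names are an assumption based on the Nvme0/Nvme1 controller names in the JSON config below):

cat > dif_rand.fio <<'EOF'
[global]
; spdk_bdev engine comes from the LD_PRELOADed plugin; thread mode is required
ioengine=spdk_bdev
thread=1
rw=randread
; read,write,trim block sizes, matching the banner printed below
bs=8k,16k,128k
iodepth=8
numjobs=2
time_based=1
runtime=5

[filename0]
; bdev names assumed from the Nvme0/Nvme1 controllers in the JSON config
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF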
00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:17.118 { 00:35:17.118 "params": { 00:35:17.118 "name": "Nvme$subsystem", 00:35:17.118 "trtype": "$TEST_TRANSPORT", 00:35:17.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:17.118 "adrfam": "ipv4", 00:35:17.118 "trsvcid": "$NVMF_PORT", 00:35:17.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:17.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:17.118 "hdgst": ${hdgst:-false}, 00:35:17.118 "ddgst": ${ddgst:-false} 00:35:17.118 }, 00:35:17.118 "method": "bdev_nvme_attach_controller" 00:35:17.118 } 00:35:17.118 EOF 00:35:17.118 )") 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
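The jq step above is the tail end of create_json_sub_conf; the JSON it emits (printed next) reaches fio through a process substitution, which is why the recorded command line shows /dev/fd/62 for --spdk_json_conf and /dev/fd/61 for the job file. Reconstructed as a single command it is roughly the following (gen_nvmf_target_json and gen_fio_conf are the helpers traced above, so this only makes sense sourced inside the test environment):

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev \
  --spdk_json_conf <(gen_nvmf_target_json 0 1) \
  <(gen_fio_conf)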
00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:17.118 15:46:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:17.118 "params": { 00:35:17.118 "name": "Nvme0", 00:35:17.118 "trtype": "tcp", 00:35:17.118 "traddr": "10.0.0.2", 00:35:17.118 "adrfam": "ipv4", 00:35:17.118 "trsvcid": "4420", 00:35:17.118 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:17.118 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:17.118 "hdgst": false, 00:35:17.119 "ddgst": false 00:35:17.119 }, 00:35:17.119 "method": "bdev_nvme_attach_controller" 00:35:17.119 },{ 00:35:17.119 "params": { 00:35:17.119 "name": "Nvme1", 00:35:17.119 "trtype": "tcp", 00:35:17.119 "traddr": "10.0.0.2", 00:35:17.119 "adrfam": "ipv4", 00:35:17.119 "trsvcid": "4420", 00:35:17.119 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:17.119 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:17.119 "hdgst": false, 00:35:17.119 "ddgst": false 00:35:17.119 }, 00:35:17.119 "method": "bdev_nvme_attach_controller" 00:35:17.119 }' 00:35:17.119 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:17.119 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:17.119 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:17.119 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:17.119 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:17.119 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:17.119 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:17.119 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:17.119 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:17.119 15:46:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:17.119 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:17.119 ... 00:35:17.119 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:17.119 ... 
00:35:17.119 fio-3.35 00:35:17.119 Starting 4 threads 00:35:17.119 EAL: No free 2048 kB hugepages reported on node 1 00:35:22.382 00:35:22.382 filename0: (groupid=0, jobs=1): err= 0: pid=1277312: Sat Jul 13 15:46:52 2024 00:35:22.382 read: IOPS=2020, BW=15.8MiB/s (16.6MB/s)(79.0MiB/5002msec) 00:35:22.382 slat (nsec): min=4230, max=36306, avg=13916.05, stdev=3123.97 00:35:22.382 clat (usec): min=1042, max=8337, avg=3913.50, stdev=350.68 00:35:22.382 lat (usec): min=1056, max=8348, avg=3927.42, stdev=350.54 00:35:22.382 clat percentiles (usec): 00:35:22.382 | 1.00th=[ 3326], 5.00th=[ 3687], 10.00th=[ 3720], 20.00th=[ 3752], 00:35:22.382 | 30.00th=[ 3785], 40.00th=[ 3818], 50.00th=[ 3818], 60.00th=[ 3884], 00:35:22.382 | 70.00th=[ 3982], 80.00th=[ 4015], 90.00th=[ 4080], 95.00th=[ 4359], 00:35:22.382 | 99.00th=[ 5538], 99.50th=[ 5669], 99.90th=[ 6390], 99.95th=[ 8160], 00:35:22.382 | 99.99th=[ 8160] 00:35:22.382 bw ( KiB/s): min=15216, max=16384, per=25.80%, avg=16164.80, stdev=377.73, samples=10 00:35:22.382 iops : min= 1902, max= 2048, avg=2020.60, stdev=47.22, samples=10 00:35:22.382 lat (msec) : 2=0.02%, 4=72.66%, 10=27.32% 00:35:22.382 cpu : usr=93.50%, sys=5.92%, ctx=8, majf=0, minf=9 00:35:22.382 IO depths : 1=0.5%, 2=8.9%, 4=59.5%, 8=31.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:22.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.382 complete : 0=0.0%, 4=95.5%, 8=4.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.382 issued rwts: total=10108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.382 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:22.382 filename0: (groupid=0, jobs=1): err= 0: pid=1277313: Sat Jul 13 15:46:52 2024 00:35:22.382 read: IOPS=2037, BW=15.9MiB/s (16.7MB/s)(79.6MiB/5003msec) 00:35:22.382 slat (nsec): min=3981, max=24095, avg=10305.15, stdev=3004.41 00:35:22.382 clat (usec): min=1656, max=7308, avg=3885.58, stdev=338.48 00:35:22.382 lat (usec): min=1670, max=7322, avg=3895.89, stdev=338.43 00:35:22.382 clat percentiles (usec): 00:35:22.382 | 1.00th=[ 3130], 5.00th=[ 3621], 10.00th=[ 3720], 20.00th=[ 3752], 00:35:22.382 | 30.00th=[ 3785], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3851], 00:35:22.382 | 70.00th=[ 3982], 80.00th=[ 4015], 90.00th=[ 4047], 95.00th=[ 4113], 00:35:22.382 | 99.00th=[ 5538], 99.50th=[ 5669], 99.90th=[ 6390], 99.95th=[ 7111], 00:35:22.382 | 99.99th=[ 7242] 00:35:22.382 bw ( KiB/s): min=15712, max=16464, per=26.02%, avg=16302.40, stdev=214.59, samples=10 00:35:22.382 iops : min= 1964, max= 2058, avg=2037.80, stdev=26.82, samples=10 00:35:22.382 lat (msec) : 2=0.04%, 4=74.03%, 10=25.93% 00:35:22.382 cpu : usr=93.94%, sys=5.52%, ctx=13, majf=0, minf=0 00:35:22.382 IO depths : 1=0.1%, 2=20.3%, 4=53.8%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:22.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.382 complete : 0=0.0%, 4=90.5%, 8=9.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.382 issued rwts: total=10194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.382 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:22.382 filename1: (groupid=0, jobs=1): err= 0: pid=1277314: Sat Jul 13 15:46:52 2024 00:35:22.382 read: IOPS=1886, BW=14.7MiB/s (15.5MB/s)(73.7MiB/5002msec) 00:35:22.382 slat (nsec): min=3993, max=35524, avg=11428.49, stdev=3592.35 00:35:22.382 clat (usec): min=1353, max=6843, avg=4206.96, stdev=733.76 00:35:22.382 lat (usec): min=1367, max=6852, avg=4218.39, stdev=732.64 00:35:22.382 clat percentiles (usec): 00:35:22.382 | 1.00th=[ 3359], 5.00th=[ 
3687], 10.00th=[ 3752], 20.00th=[ 3785], 00:35:22.382 | 30.00th=[ 3818], 40.00th=[ 3851], 50.00th=[ 3916], 60.00th=[ 3982], 00:35:22.382 | 70.00th=[ 4015], 80.00th=[ 4359], 90.00th=[ 5669], 95.00th=[ 5932], 00:35:22.382 | 99.00th=[ 6063], 99.50th=[ 6063], 99.90th=[ 6259], 99.95th=[ 6456], 00:35:22.382 | 99.99th=[ 6849] 00:35:22.382 bw ( KiB/s): min=14960, max=15712, per=24.08%, avg=15089.40, stdev=227.73, samples=10 00:35:22.382 iops : min= 1870, max= 1964, avg=1886.10, stdev=28.47, samples=10 00:35:22.382 lat (msec) : 2=0.06%, 4=64.78%, 10=35.16% 00:35:22.382 cpu : usr=94.72%, sys=4.80%, ctx=15, majf=0, minf=9 00:35:22.382 IO depths : 1=0.1%, 2=0.9%, 4=71.4%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:22.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.382 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.382 issued rwts: total=9434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.382 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:22.382 filename1: (groupid=0, jobs=1): err= 0: pid=1277315: Sat Jul 13 15:46:52 2024 00:35:22.382 read: IOPS=1889, BW=14.8MiB/s (15.5MB/s)(73.8MiB/5003msec) 00:35:22.382 slat (nsec): min=3969, max=28845, avg=11108.49, stdev=3264.17 00:35:22.382 clat (usec): min=2942, max=7601, avg=4200.72, stdev=755.77 00:35:22.382 lat (usec): min=2950, max=7615, avg=4211.83, stdev=755.89 00:35:22.382 clat percentiles (usec): 00:35:22.382 | 1.00th=[ 3392], 5.00th=[ 3621], 10.00th=[ 3654], 20.00th=[ 3687], 00:35:22.382 | 30.00th=[ 3785], 40.00th=[ 3916], 50.00th=[ 3949], 60.00th=[ 3982], 00:35:22.382 | 70.00th=[ 4047], 80.00th=[ 4113], 90.00th=[ 5669], 95.00th=[ 5932], 00:35:22.382 | 99.00th=[ 6063], 99.50th=[ 6194], 99.90th=[ 6849], 99.95th=[ 7046], 00:35:22.382 | 99.99th=[ 7570] 00:35:22.382 bw ( KiB/s): min=14912, max=15616, per=24.12%, avg=15113.40, stdev=210.08, samples=10 00:35:22.382 iops : min= 1864, max= 1952, avg=1889.10, stdev=26.24, samples=10 00:35:22.382 lat (msec) : 4=61.15%, 10=38.85% 00:35:22.382 cpu : usr=94.12%, sys=5.38%, ctx=24, majf=0, minf=0 00:35:22.382 IO depths : 1=0.1%, 2=0.2%, 4=72.5%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:22.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.382 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.382 issued rwts: total=9452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.382 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:22.382 00:35:22.382 Run status group 0 (all jobs): 00:35:22.382 READ: bw=61.2MiB/s (64.2MB/s), 14.7MiB/s-15.9MiB/s (15.5MB/s-16.7MB/s), io=306MiB (321MB), run=5002-5003msec 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.382 00:35:22.382 real 0m24.245s 00:35:22.382 user 4m32.359s 00:35:22.382 sys 0m7.842s 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:22.382 15:46:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.382 ************************************ 00:35:22.382 END TEST fio_dif_rand_params 00:35:22.382 ************************************ 00:35:22.382 15:46:52 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:22.382 15:46:52 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:22.382 15:46:52 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:22.382 15:46:52 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:22.382 15:46:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:22.382 ************************************ 00:35:22.382 START TEST fio_dif_digest 00:35:22.382 ************************************ 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:22.382 15:46:52 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:22.382 bdev_null0 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:22.382 [2024-07-13 15:46:52.873392] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:22.382 { 00:35:22.382 "params": { 00:35:22.382 "name": "Nvme$subsystem", 00:35:22.382 "trtype": "$TEST_TRANSPORT", 00:35:22.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:22.382 "adrfam": "ipv4", 00:35:22.382 "trsvcid": "$NVMF_PORT", 00:35:22.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:22.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:22.382 "hdgst": ${hdgst:-false}, 00:35:22.382 "ddgst": ${ddgst:-false} 00:35:22.382 }, 00:35:22.382 "method": "bdev_nvme_attach_controller" 00:35:22.382 } 00:35:22.382 EOF 00:35:22.382 )") 00:35:22.382 
15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:22.382 15:46:52 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:22.382 "params": { 00:35:22.382 "name": "Nvme0", 00:35:22.382 "trtype": "tcp", 00:35:22.382 "traddr": "10.0.0.2", 00:35:22.383 "adrfam": "ipv4", 00:35:22.383 "trsvcid": "4420", 00:35:22.383 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:22.383 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:22.383 "hdgst": true, 00:35:22.383 "ddgst": true 00:35:22.383 }, 00:35:22.383 "method": "bdev_nvme_attach_controller" 00:35:22.383 }' 00:35:22.383 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:22.383 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:22.383 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:22.383 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:22.383 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:22.383 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:22.383 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:22.383 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:22.383 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:22.383 15:46:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:22.383 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:22.383 ... 
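The JSON fragment printed above is the bdev_nvme_attach_controller entry that gen_nvmf_target_json hands to fio's SPDK bdev plugin, with header and data digests enabled. A rough standalone equivalent of the invocation follows (a sketch only: config.json is a hypothetical file holding that fragment inside a bdev "subsystems" wrapper, Nvme0n1 is the namespace bdev the attach call is expected to produce, and the job options mirror the filename0 line above):

  # run fio against the exported namespace through the SPDK bdev plugin
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=config.json \
    --name=filename0 --filename=Nvme0n1 --thread=1 \
    --rw=randread --bs=128k --numjobs=3 --iodepth=3 --runtime=10 --time_based=1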
00:35:22.383 fio-3.35 00:35:22.383 Starting 3 threads 00:35:22.640 EAL: No free 2048 kB hugepages reported on node 1 00:35:34.847 00:35:34.847 filename0: (groupid=0, jobs=1): err= 0: pid=1278192: Sat Jul 13 15:47:03 2024 00:35:34.847 read: IOPS=237, BW=29.7MiB/s (31.1MB/s)(298MiB/10046msec) 00:35:34.847 slat (nsec): min=4689, max=32637, avg=18143.48, stdev=2223.68 00:35:34.847 clat (usec): min=6410, max=55171, avg=12590.30, stdev=2463.67 00:35:34.847 lat (usec): min=6425, max=55190, avg=12608.44, stdev=2463.79 00:35:34.847 clat percentiles (usec): 00:35:34.847 | 1.00th=[ 8586], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10945], 00:35:34.847 | 30.00th=[11994], 40.00th=[12518], 50.00th=[12911], 60.00th=[13173], 00:35:34.847 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14222], 95.00th=[14615], 00:35:34.847 | 99.00th=[15533], 99.50th=[15926], 99.90th=[52691], 99.95th=[53216], 00:35:34.847 | 99.99th=[55313] 00:35:34.847 bw ( KiB/s): min=27904, max=33280, per=38.51%, avg=30502.40, stdev=1497.97, samples=20 00:35:34.847 iops : min= 218, max= 260, avg=238.30, stdev=11.70, samples=20 00:35:34.847 lat (msec) : 10=12.57%, 20=87.09%, 50=0.17%, 100=0.17% 00:35:34.847 cpu : usr=90.85%, sys=8.45%, ctx=57, majf=0, minf=109 00:35:34.847 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.847 issued rwts: total=2386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.847 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:34.847 filename0: (groupid=0, jobs=1): err= 0: pid=1278193: Sat Jul 13 15:47:03 2024 00:35:34.847 read: IOPS=205, BW=25.7MiB/s (27.0MB/s)(258MiB/10047msec) 00:35:34.847 slat (nsec): min=4717, max=38031, avg=14790.78, stdev=2512.56 00:35:34.847 clat (usec): min=6429, max=58312, avg=14542.27, stdev=6816.24 00:35:34.847 lat (usec): min=6442, max=58332, avg=14557.06, stdev=6816.27 00:35:34.847 clat percentiles (usec): 00:35:34.847 | 1.00th=[ 9110], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[12518], 00:35:34.847 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:35:34.847 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15270], 95.00th=[15795], 00:35:34.847 | 99.00th=[55837], 99.50th=[56361], 99.90th=[56886], 99.95th=[57934], 00:35:34.847 | 99.99th=[58459] 00:35:34.847 bw ( KiB/s): min=20224, max=29696, per=33.37%, avg=26432.00, stdev=2197.30, samples=20 00:35:34.847 iops : min= 158, max= 232, avg=206.50, stdev=17.17, samples=20 00:35:34.847 lat (msec) : 10=5.76%, 20=91.53%, 50=0.15%, 100=2.56% 00:35:34.847 cpu : usr=90.81%, sys=7.80%, ctx=237, majf=0, minf=174 00:35:34.847 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.847 issued rwts: total=2067,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.847 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:34.847 filename0: (groupid=0, jobs=1): err= 0: pid=1278194: Sat Jul 13 15:47:03 2024 00:35:34.847 read: IOPS=175, BW=22.0MiB/s (23.0MB/s)(221MiB/10045msec) 00:35:34.847 slat (nsec): min=5192, max=34321, avg=14454.53, stdev=1523.81 00:35:34.847 clat (usec): min=8620, max=59976, avg=17041.82, stdev=8527.01 00:35:34.847 lat (usec): min=8634, max=59991, avg=17056.27, stdev=8527.09 00:35:34.847 clat percentiles (usec): 
00:35:34.847 | 1.00th=[ 9765], 5.00th=[10421], 10.00th=[11600], 20.00th=[14353], 00:35:34.847 | 30.00th=[14877], 40.00th=[15401], 50.00th=[15664], 60.00th=[16188], 00:35:34.847 | 70.00th=[16581], 80.00th=[17171], 90.00th=[17695], 95.00th=[19268], 00:35:34.847 | 99.00th=[57934], 99.50th=[58983], 99.90th=[60031], 99.95th=[60031], 00:35:34.847 | 99.99th=[60031] 00:35:34.847 bw ( KiB/s): min=16896, max=27136, per=28.46%, avg=22543.30, stdev=2520.12, samples=20 00:35:34.847 iops : min= 132, max= 212, avg=176.10, stdev=19.67, samples=20 00:35:34.847 lat (msec) : 10=1.93%, 20=93.48%, 50=0.40%, 100=4.20% 00:35:34.847 cpu : usr=92.08%, sys=7.14%, ctx=12, majf=0, minf=75 00:35:34.847 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.847 issued rwts: total=1764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.847 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:34.847 00:35:34.847 Run status group 0 (all jobs): 00:35:34.847 READ: bw=77.3MiB/s (81.1MB/s), 22.0MiB/s-29.7MiB/s (23.0MB/s-31.1MB/s), io=777MiB (815MB), run=10045-10047msec 00:35:34.847 15:47:03 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:34.847 15:47:03 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:34.847 15:47:03 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:34.847 15:47:03 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:34.847 15:47:03 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:34.847 15:47:03 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:34.847 15:47:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.847 15:47:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:34.847 15:47:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.847 15:47:03 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:34.847 15:47:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.847 15:47:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:34.847 15:47:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.847 00:35:34.847 real 0m11.129s 00:35:34.847 user 0m28.542s 00:35:34.847 sys 0m2.633s 00:35:34.847 15:47:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:34.847 15:47:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:34.847 ************************************ 00:35:34.847 END TEST fio_dif_digest 00:35:34.847 ************************************ 00:35:34.847 15:47:03 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:34.847 15:47:03 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:34.847 15:47:03 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:34.847 15:47:03 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:34.847 15:47:03 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:35:34.847 15:47:03 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:34.847 15:47:03 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:35:34.847 15:47:03 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:34.847 15:47:03 nvmf_dif -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:34.847 rmmod nvme_tcp 00:35:34.847 rmmod nvme_fabrics 00:35:34.847 rmmod nvme_keyring 00:35:34.847 15:47:04 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:34.847 15:47:04 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:35:34.848 15:47:04 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:35:34.848 15:47:04 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1272030 ']' 00:35:34.848 15:47:04 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1272030 00:35:34.848 15:47:04 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1272030 ']' 00:35:34.848 15:47:04 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1272030 00:35:34.848 15:47:04 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:35:34.848 15:47:04 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:34.848 15:47:04 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1272030 00:35:34.848 15:47:04 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:34.848 15:47:04 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:34.848 15:47:04 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1272030' 00:35:34.848 killing process with pid 1272030 00:35:34.848 15:47:04 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1272030 00:35:34.848 15:47:04 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1272030 00:35:34.848 15:47:04 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:34.848 15:47:04 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:34.848 Waiting for block devices as requested 00:35:34.848 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:34.848 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:34.848 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:35.106 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:35.106 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:35.106 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:35.106 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:35.365 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:35.365 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:35.365 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:35.365 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:35.624 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:35.624 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:35.624 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:35.882 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:35.882 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:35.882 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:35.882 15:47:06 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:35.882 15:47:06 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:35.882 15:47:06 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:35.882 15:47:06 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:35.882 15:47:06 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:35.882 15:47:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:35.882 15:47:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.413 15:47:08 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:38.413 00:35:38.413 real 1m6.478s 00:35:38.413 user 6m27.954s 00:35:38.413 sys 0m19.648s 00:35:38.413 15:47:08 nvmf_dif -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:35:38.413 15:47:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:38.413 ************************************ 00:35:38.413 END TEST nvmf_dif 00:35:38.413 ************************************ 00:35:38.413 15:47:08 -- common/autotest_common.sh@1142 -- # return 0 00:35:38.413 15:47:08 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:38.413 15:47:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:38.413 15:47:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:38.413 15:47:08 -- common/autotest_common.sh@10 -- # set +x 00:35:38.413 ************************************ 00:35:38.413 START TEST nvmf_abort_qd_sizes 00:35:38.413 ************************************ 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:38.414 * Looking for test storage... 00:35:38.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.414 15:47:08 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:35:38.414 15:47:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:39.836 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:39.836 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:39.836 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:39.836 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:39.836 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:40.095 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:40.095 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:40.095 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:40.095 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:40.095 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:40.095 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:40.095 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:40.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:40.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:35:40.095 00:35:40.095 --- 10.0.0.2 ping statistics --- 00:35:40.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:40.095 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:35:40.095 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:40.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:40.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:35:40.095 00:35:40.095 --- 10.0.0.1 ping statistics --- 00:35:40.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:40.095 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:35:40.095 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:40.095 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:35:40.095 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:40.095 15:47:10 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:41.473 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:41.473 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:41.473 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:41.473 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:41.473 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:41.473 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:41.473 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:41.473 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:41.473 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:41.473 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:41.473 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:41.473 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:41.473 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:41.473 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:41.473 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:41.473 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:42.411 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:42.411 15:47:13 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:42.411 15:47:13 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:42.411 15:47:13 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:42.411 15:47:13 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:42.411 15:47:13 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:42.411 15:47:13 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:42.411 15:47:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:42.411 15:47:13 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:42.411 15:47:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:42.411 15:47:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:42.411 15:47:13 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1282982 00:35:42.411 15:47:13 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:42.411 15:47:13 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1282982 00:35:42.411 15:47:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1282982 ']' 00:35:42.411 15:47:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:42.411 15:47:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:42.411 15:47:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:42.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:42.411 15:47:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:42.411 15:47:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:42.411 [2024-07-13 15:47:13.094408] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:35:42.411 [2024-07-13 15:47:13.094480] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:42.411 EAL: No free 2048 kB hugepages reported on node 1 00:35:42.411 [2024-07-13 15:47:13.134291] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:42.411 [2024-07-13 15:47:13.166489] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:42.670 [2024-07-13 15:47:13.260191] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:42.670 [2024-07-13 15:47:13.260253] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:42.670 [2024-07-13 15:47:13.260269] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:42.670 [2024-07-13 15:47:13.260283] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:42.670 [2024-07-13 15:47:13.260294] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:42.670 [2024-07-13 15:47:13.260592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:42.670 [2024-07-13 15:47:13.260662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:42.670 [2024-07-13 15:47:13.263885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:42.670 [2024-07-13 15:47:13.263896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- 
scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:42.670 15:47:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:42.670 ************************************ 00:35:42.670 START TEST spdk_target_abort 00:35:42.670 ************************************ 00:35:42.670 15:47:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:35:42.670 15:47:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:42.670 15:47:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:35:42.670 15:47:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.670 15:47:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.952 spdk_targetn1 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.952 [2024-07-13 15:47:16.268826] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 
-- # set +x 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.952 [2024-07-13 15:47:16.301097] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.952 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:45.953 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:45.953 15:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:45.953 EAL: No free 2048 kB hugepages reported on node 1 00:35:49.240 Initializing NVMe Controllers 00:35:49.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:49.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:49.240 Initialization complete. Launching workers. 00:35:49.240 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8536, failed: 0 00:35:49.240 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1242, failed to submit 7294 00:35:49.240 success 694, unsuccess 548, failed 0 00:35:49.240 15:47:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:49.240 15:47:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:49.240 EAL: No free 2048 kB hugepages reported on node 1 00:35:52.528 Initializing NVMe Controllers 00:35:52.528 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:52.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:52.528 Initialization complete. Launching workers. 00:35:52.528 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8480, failed: 0 00:35:52.528 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1233, failed to submit 7247 00:35:52.528 success 311, unsuccess 922, failed 0 00:35:52.528 15:47:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:52.528 15:47:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:52.528 EAL: No free 2048 kB hugepages reported on node 1 00:35:55.815 Initializing NVMe Controllers 00:35:55.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:55.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:55.815 Initialization complete. Launching workers. 
00:35:55.815 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28186, failed: 0 00:35:55.815 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2731, failed to submit 25455 00:35:55.815 success 314, unsuccess 2417, failed 0 00:35:55.815 15:47:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:55.815 15:47:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.815 15:47:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:55.815 15:47:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.815 15:47:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:55.815 15:47:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.815 15:47:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:56.751 15:47:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.751 15:47:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1282982 00:35:56.751 15:47:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1282982 ']' 00:35:56.751 15:47:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1282982 00:35:56.751 15:47:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:35:56.751 15:47:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:56.751 15:47:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1282982 00:35:56.752 15:47:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:56.752 15:47:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:56.752 15:47:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1282982' 00:35:56.752 killing process with pid 1282982 00:35:56.752 15:47:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1282982 00:35:56.752 15:47:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1282982 00:35:57.011 00:35:57.011 real 0m14.262s 00:35:57.011 user 0m53.052s 00:35:57.011 sys 0m3.002s 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:57.011 ************************************ 00:35:57.011 END TEST spdk_target_abort 00:35:57.011 ************************************ 00:35:57.011 15:47:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:35:57.011 15:47:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:57.011 15:47:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:57.011 15:47:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:57.011 15:47:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:57.011 
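Both halves of this test drive the same abort example binary; the spdk_target_abort runs above differ only in queue depth (-q 4, 24, 64) against the TCP listener at 10.0.0.2:4420, and the kernel_target_abort runs that follow reuse the same pattern against 10.0.0.1. A sketch of the loop the test effectively performs (binary path and transport string taken from this run; not part of the captured output):

  # abort-heavy mixed read/write workload at increasing queue depths
  for qd in 4 24 64; do
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
      -q "$qd" -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done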
************************************ 00:35:57.011 START TEST kernel_target_abort 00:35:57.011 ************************************ 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:57.011 15:47:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:58.389 Waiting for block devices as requested 00:35:58.389 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:58.389 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:58.389 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:58.678 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:58.678 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:58.678 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:58.678 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:58.678 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:58.939 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:58.939 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:58.939 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:58.939 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:59.199 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:59.199 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:59.199 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:59.199 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:59.459 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:59.459 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:59.459 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:59.459 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:59.459 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:35:59.459 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:59.459 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:59.459 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:59.459 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:59.459 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:59.719 No valid GPT data, bailing 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:59.719 15:47:30 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:59.719 00:35:59.719 Discovery Log Number of Records 2, Generation counter 2 00:35:59.719 =====Discovery Log Entry 0====== 00:35:59.719 trtype: tcp 00:35:59.719 adrfam: ipv4 00:35:59.719 subtype: current discovery subsystem 00:35:59.719 treq: not specified, sq flow control disable supported 00:35:59.719 portid: 1 00:35:59.719 trsvcid: 4420 00:35:59.719 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:59.719 traddr: 10.0.0.1 00:35:59.719 eflags: none 00:35:59.719 sectype: none 00:35:59.719 =====Discovery Log Entry 1====== 00:35:59.719 trtype: tcp 00:35:59.719 adrfam: ipv4 00:35:59.719 subtype: nvme subsystem 00:35:59.719 treq: not specified, sq flow control disable supported 00:35:59.719 portid: 1 00:35:59.719 trsvcid: 4420 00:35:59.719 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:59.719 traddr: 10.0.0.1 00:35:59.719 eflags: none 00:35:59.719 sectype: none 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:59.719 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:59.720 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:59.720 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:59.720 15:47:30 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:59.720 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:59.720 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:59.720 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:59.720 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:59.720 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:59.720 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:59.720 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:59.720 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:59.720 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:59.720 15:47:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:59.720 EAL: No free 2048 kB hugepages reported on node 1 00:36:03.009 Initializing NVMe Controllers 00:36:03.009 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:03.009 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:03.009 Initialization complete. Launching workers. 00:36:03.009 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30290, failed: 0 00:36:03.009 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30290, failed to submit 0 00:36:03.009 success 0, unsuccess 30290, failed 0 00:36:03.009 15:47:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:03.009 15:47:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:03.009 EAL: No free 2048 kB hugepages reported on node 1 00:36:06.296 Initializing NVMe Controllers 00:36:06.296 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:06.296 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:06.296 Initialization complete. Launching workers. 
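The preceding lines are configure_kernel_target at work: the Linux kernel NVMe target (nvmet) is assembled through configfs, pointed at the local NVMe namespace, and then the SPDK abort example is run against it at queue depths 4, 24 and 64. A minimal standalone sketch of that configfs sequence follows; the NQN, backing device and listen address are the ones shown in the log, while the attribute file names are the standard nvmet configfs entries rather than literal quotes from the test helper, so treat this as an illustration and not the test's own code.

#!/usr/bin/env bash
# Sketch: export a local block device over NVMe/TCP via the kernel target,
# mirroring the configfs steps visible in the log above. Run as root.
set -euo pipefail

nqn=nqn.2016-06.io.spdk:testnqn     # subsystem NQN from the log
dev=/dev/nvme0n1                    # backing block device from the log
addr=10.0.0.1                       # listen address from the log
nvmet=/sys/kernel/config/nvmet

modprobe nvmet
modprobe nvmet_tcp

# Subsystem with one namespace backed by $dev
mkdir "$nvmet/subsystems/$nqn"
echo 1      > "$nvmet/subsystems/$nqn/attr_allow_any_host"
mkdir "$nvmet/subsystems/$nqn/namespaces/1"
echo "$dev" > "$nvmet/subsystems/$nqn/namespaces/1/device_path"
echo 1      > "$nvmet/subsystems/$nqn/namespaces/1/enable"

# TCP port on $addr:4420 and link the subsystem to it
mkdir "$nvmet/ports/1"
echo "$addr" > "$nvmet/ports/1/addr_traddr"
echo tcp     > "$nvmet/ports/1/addr_trtype"
echo 4420    > "$nvmet/ports/1/addr_trsvcid"
echo ipv4    > "$nvmet/ports/1/addr_adrfam"
ln -s "$nvmet/subsystems/$nqn" "$nvmet/ports/1/subsystems/"

The teardown visible after the last abort run mirrors this setup: remove the port's subsystem symlink, rmdir the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet.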
00:36:06.296 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 61053, failed: 0 00:36:06.296 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15390, failed to submit 45663 00:36:06.296 success 0, unsuccess 15390, failed 0 00:36:06.296 15:47:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:06.296 15:47:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:06.296 EAL: No free 2048 kB hugepages reported on node 1 00:36:09.584 Initializing NVMe Controllers 00:36:09.584 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:09.584 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:09.584 Initialization complete. Launching workers. 00:36:09.584 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 59740, failed: 0 00:36:09.584 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14906, failed to submit 44834 00:36:09.584 success 0, unsuccess 14906, failed 0 00:36:09.584 15:47:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:09.584 15:47:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:09.584 15:47:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:09.584 15:47:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:09.584 15:47:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:09.584 15:47:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:09.584 15:47:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:09.584 15:47:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:09.584 15:47:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:09.584 15:47:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:10.152 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:10.152 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:10.152 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:10.152 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:10.152 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:10.152 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:10.152 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:10.152 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:10.152 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:10.152 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:10.152 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:10.152 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:10.411 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:10.411 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:36:10.411 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:10.411 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:11.346 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:11.346 00:36:11.346 real 0m14.275s 00:36:11.346 user 0m4.986s 00:36:11.346 sys 0m3.317s 00:36:11.346 15:47:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:11.346 15:47:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.346 ************************************ 00:36:11.346 END TEST kernel_target_abort 00:36:11.346 ************************************ 00:36:11.346 15:47:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:36:11.346 15:47:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:11.346 15:47:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:11.346 15:47:42 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:11.346 15:47:42 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:11.346 15:47:42 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:11.346 15:47:42 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:11.346 15:47:42 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:11.346 15:47:42 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:11.346 rmmod nvme_tcp 00:36:11.346 rmmod nvme_fabrics 00:36:11.346 rmmod nvme_keyring 00:36:11.346 15:47:42 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:11.346 15:47:42 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:11.346 15:47:42 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:11.346 15:47:42 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1282982 ']' 00:36:11.346 15:47:42 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1282982 00:36:11.346 15:47:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1282982 ']' 00:36:11.346 15:47:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1282982 00:36:11.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1282982) - No such process 00:36:11.346 15:47:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1282982 is not found' 00:36:11.346 Process with pid 1282982 is not found 00:36:11.346 15:47:42 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:11.346 15:47:42 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:12.723 Waiting for block devices as requested 00:36:12.723 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:12.723 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:12.723 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:12.723 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:12.982 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:12.982 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:12.982 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:12.982 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:13.241 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:13.241 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:13.241 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:13.241 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:13.501 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:13.501 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:36:13.501 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:13.501 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:13.759 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:13.759 15:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:13.759 15:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:13.759 15:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:13.759 15:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:13.759 15:47:44 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:13.759 15:47:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:13.759 15:47:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:16.294 15:47:46 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:16.294 00:36:16.294 real 0m37.717s 00:36:16.294 user 1m0.124s 00:36:16.294 sys 0m9.492s 00:36:16.294 15:47:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:16.294 15:47:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:16.294 ************************************ 00:36:16.294 END TEST nvmf_abort_qd_sizes 00:36:16.294 ************************************ 00:36:16.294 15:47:46 -- common/autotest_common.sh@1142 -- # return 0 00:36:16.294 15:47:46 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:16.294 15:47:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:16.294 15:47:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:16.294 15:47:46 -- common/autotest_common.sh@10 -- # set +x 00:36:16.294 ************************************ 00:36:16.294 START TEST keyring_file 00:36:16.294 ************************************ 00:36:16.294 15:47:46 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:16.294 * Looking for test storage... 
00:36:16.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:16.294 15:47:46 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:16.294 15:47:46 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:16.294 15:47:46 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:16.294 15:47:46 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:16.294 15:47:46 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:16.294 15:47:46 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:16.294 15:47:46 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:16.294 15:47:46 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:16.294 15:47:46 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:16.294 15:47:46 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:16.294 15:47:46 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:16.294 15:47:46 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:16.294 15:47:46 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:16.294 15:47:46 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:16.295 15:47:46 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:16.295 15:47:46 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:16.295 15:47:46 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:16.295 15:47:46 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.295 15:47:46 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.295 15:47:46 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.295 15:47:46 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:16.295 15:47:46 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:16.295 15:47:46 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:16.295 15:47:46 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:16.295 15:47:46 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:16.295 15:47:46 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:16.295 15:47:46 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:16.295 15:47:46 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:16.295 15:47:46 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:16.295 15:47:46 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:16.295 15:47:46 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:16.295 15:47:46 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:16.295 15:47:46 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:16.295 15:47:46 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:16.295 15:47:46 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.zgSTxJazCG 00:36:16.295 15:47:46 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:16.295 15:47:46 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.zgSTxJazCG 00:36:16.295 15:47:46 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.zgSTxJazCG 00:36:16.295 15:47:46 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.zgSTxJazCG 00:36:16.295 15:47:46 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:16.295 15:47:46 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:16.295 15:47:46 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:16.295 15:47:46 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:16.295 15:47:46 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:16.295 15:47:46 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:16.295 15:47:46 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.eqV5U8Sixp 00:36:16.295 15:47:46 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:16.295 15:47:46 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:16.295 15:47:46 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.eqV5U8Sixp 00:36:16.295 15:47:46 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.eqV5U8Sixp 00:36:16.295 15:47:46 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.eqV5U8Sixp 00:36:16.295 15:47:46 keyring_file -- keyring/file.sh@30 -- # tgtpid=1288733 00:36:16.295 15:47:46 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:16.295 15:47:46 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1288733 00:36:16.295 15:47:46 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1288733 ']' 00:36:16.295 15:47:46 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:16.295 15:47:46 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:16.295 15:47:46 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:16.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:16.295 15:47:46 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:16.295 15:47:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:16.295 [2024-07-13 15:47:46.686353] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:36:16.295 [2024-07-13 15:47:46.686450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1288733 ] 00:36:16.295 EAL: No free 2048 kB hugepages reported on node 1 00:36:16.295 [2024-07-13 15:47:46.718706] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
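The prep_key calls above build each key file with mktemp, write a TLS PSK in NVMe interchange form (the NVMeTLSkey-1 prefix seen in the log) through the test's own python helper, and tighten the file mode to 0600 before the path is handed to the keyring. A hedged shell-level sketch of that flow is below; format_psk is a hypothetical stand-in for the test's format_interchange_psk helper, whose exact encoding is not reproduced here.

#!/usr/bin/env bash
# Sketch of prep_key as seen in the log: temp file, interchange-format PSK,
# permissions restricted to 0600 so keyring_file will accept it.
set -euo pipefail

key_hex=00112233445566778899aabbccddeeff   # raw key material from the log
path=$(mktemp)                             # e.g. /tmp/tmp.XXXXXXXXXX

# format_psk is a placeholder for the test's python-based
# format_interchange_psk helper, which emits an "NVMeTLSkey-1:..." string.
format_psk "$key_hex" > "$path"

chmod 0600 "$path"                         # required by the file keyring
echo "$path"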
00:36:16.295 [2024-07-13 15:47:46.744961] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:16.295 [2024-07-13 15:47:46.833662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:16.553 15:47:47 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:16.553 15:47:47 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:16.553 15:47:47 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:16.553 15:47:47 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.553 15:47:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:16.553 [2024-07-13 15:47:47.093318] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:16.553 null0 00:36:16.553 [2024-07-13 15:47:47.125381] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:16.553 [2024-07-13 15:47:47.125817] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:16.553 [2024-07-13 15:47:47.133382] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:16.553 15:47:47 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.553 15:47:47 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:16.553 15:47:47 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:16.553 15:47:47 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:16.553 15:47:47 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:16.553 15:47:47 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:16.553 15:47:47 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:16.553 15:47:47 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:16.553 15:47:47 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:16.553 15:47:47 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.553 15:47:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:16.554 [2024-07-13 15:47:47.141401] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:16.554 request: 00:36:16.554 { 00:36:16.554 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:16.554 "secure_channel": false, 00:36:16.554 "listen_address": { 00:36:16.554 "trtype": "tcp", 00:36:16.554 "traddr": "127.0.0.1", 00:36:16.554 "trsvcid": "4420" 00:36:16.554 }, 00:36:16.554 "method": "nvmf_subsystem_add_listener", 00:36:16.554 "req_id": 1 00:36:16.554 } 00:36:16.554 Got JSON-RPC error response 00:36:16.554 response: 00:36:16.554 { 00:36:16.554 "code": -32602, 00:36:16.554 "message": "Invalid parameters" 00:36:16.554 } 00:36:16.554 15:47:47 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:16.554 15:47:47 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:16.554 15:47:47 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:16.554 15:47:47 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:16.554 15:47:47 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:16.554 15:47:47 keyring_file -- keyring/file.sh@46 -- # bperfpid=1288745 00:36:16.554 15:47:47 
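The listener check above is a negative test: nvmf_subsystem_add_listener is run through the NOT wrapper and is expected to fail with "Listener already exists" (surfaced as the Invalid parameters JSON-RPC response), because the listener was already created when the target came up. A simplified sketch of that inverted-expectation pattern follows; the real helper in autotest_common.sh is more elaborate (it also distinguishes exit codes above 128 from ordinary failures, as the es handling in the log shows).

# Sketch of the "expect failure" pattern: succeed only if the wrapped
# command exits non-zero.
NOT() {
    if "$@"; then
        echo "expected failure but command succeeded: $*" >&2
        return 1
    fi
    return 0
}

# Example taken from the log: re-adding an existing listener must be rejected.
NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
    nqn.2016-06.io.spdk:cnode0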
keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:16.554 15:47:47 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1288745 /var/tmp/bperf.sock 00:36:16.554 15:47:47 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1288745 ']' 00:36:16.554 15:47:47 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:16.554 15:47:47 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:16.554 15:47:47 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:16.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:16.554 15:47:47 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:16.554 15:47:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:16.554 [2024-07-13 15:47:47.190144] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:36:16.554 [2024-07-13 15:47:47.190230] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1288745 ] 00:36:16.554 EAL: No free 2048 kB hugepages reported on node 1 00:36:16.554 [2024-07-13 15:47:47.222473] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 00:36:16.554 [2024-07-13 15:47:47.253434] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:16.812 [2024-07-13 15:47:47.344730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:16.812 15:47:47 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:16.812 15:47:47 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:16.812 15:47:47 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zgSTxJazCG 00:36:16.812 15:47:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zgSTxJazCG 00:36:17.069 15:47:47 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.eqV5U8Sixp 00:36:17.069 15:47:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.eqV5U8Sixp 00:36:17.356 15:47:47 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:17.356 15:47:47 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:17.356 15:47:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:17.356 15:47:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:17.356 15:47:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:17.615 15:47:48 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.zgSTxJazCG == \/\t\m\p\/\t\m\p\.\z\g\S\T\x\J\a\z\C\G ]] 00:36:17.615 15:47:48 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:17.615 15:47:48 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:17.615 15:47:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:36:17.615 15:47:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:17.615 15:47:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:17.873 15:47:48 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.eqV5U8Sixp == \/\t\m\p\/\t\m\p\.\e\q\V\5\U\8\S\i\x\p ]] 00:36:17.873 15:47:48 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:17.873 15:47:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:17.873 15:47:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:17.873 15:47:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:17.873 15:47:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:17.873 15:47:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:18.130 15:47:48 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:18.130 15:47:48 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:18.130 15:47:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:18.130 15:47:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:18.130 15:47:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:18.130 15:47:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:18.130 15:47:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:18.387 15:47:48 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:18.387 15:47:48 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:18.387 15:47:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:18.645 [2024-07-13 15:47:49.205917] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:18.645 nvme0n1 00:36:18.645 15:47:49 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:18.645 15:47:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:18.645 15:47:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:18.645 15:47:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:18.645 15:47:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:18.645 15:47:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:18.903 15:47:49 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:18.903 15:47:49 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:18.903 15:47:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:18.903 15:47:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:18.903 15:47:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:18.903 15:47:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:36:18.903 15:47:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:19.161 15:47:49 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:19.161 15:47:49 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:19.161 Running I/O for 1 seconds... 00:36:20.535 00:36:20.535 Latency(us) 00:36:20.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:20.535 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:20.535 nvme0n1 : 1.03 4711.56 18.40 0.00 0.00 26803.24 8495.41 36311.80 00:36:20.535 =================================================================================================================== 00:36:20.535 Total : 4711.56 18.40 0.00 0.00 26803.24 8495.41 36311.80 00:36:20.535 0 00:36:20.535 15:47:50 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:20.535 15:47:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:20.535 15:47:51 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:20.535 15:47:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:20.535 15:47:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:20.535 15:47:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:20.535 15:47:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:20.535 15:47:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:20.793 15:47:51 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:20.793 15:47:51 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:20.793 15:47:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:20.793 15:47:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:20.793 15:47:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:20.793 15:47:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:20.793 15:47:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:21.052 15:47:51 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:21.052 15:47:51 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:21.052 15:47:51 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:21.052 15:47:51 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:21.052 15:47:51 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:21.052 15:47:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:21.052 15:47:51 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:21.052 15:47:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:21.052 15:47:51 keyring_file -- 
common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:21.052 15:47:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:21.310 [2024-07-13 15:47:51.935330] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:21.310 [2024-07-13 15:47:51.935878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a87b0 (107): Transport endpoint is not connected 00:36:21.310 [2024-07-13 15:47:51.936850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a87b0 (9): Bad file descriptor 00:36:21.310 [2024-07-13 15:47:51.937849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:21.310 [2024-07-13 15:47:51.937878] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:21.310 [2024-07-13 15:47:51.937910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:21.310 request: 00:36:21.310 { 00:36:21.310 "name": "nvme0", 00:36:21.310 "trtype": "tcp", 00:36:21.310 "traddr": "127.0.0.1", 00:36:21.310 "adrfam": "ipv4", 00:36:21.310 "trsvcid": "4420", 00:36:21.310 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:21.310 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:21.310 "prchk_reftag": false, 00:36:21.310 "prchk_guard": false, 00:36:21.310 "hdgst": false, 00:36:21.310 "ddgst": false, 00:36:21.310 "psk": "key1", 00:36:21.310 "method": "bdev_nvme_attach_controller", 00:36:21.310 "req_id": 1 00:36:21.310 } 00:36:21.310 Got JSON-RPC error response 00:36:21.310 response: 00:36:21.310 { 00:36:21.310 "code": -5, 00:36:21.310 "message": "Input/output error" 00:36:21.310 } 00:36:21.310 15:47:51 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:21.310 15:47:51 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:21.310 15:47:51 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:21.310 15:47:51 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:21.310 15:47:51 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:21.310 15:47:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:21.310 15:47:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:21.310 15:47:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:21.310 15:47:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:21.310 15:47:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:21.568 15:47:52 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:21.568 15:47:52 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:21.568 15:47:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:21.568 15:47:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:21.568 15:47:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:21.568 15:47:52 keyring_file -- 
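The sequence above starts bdevperf on its own RPC socket (-r /var/tmp/bperf.sock -z), registers the two key files with keyring_file_add_key, attaches an NVMe/TCP controller with --psk key0, drives the workload with bdevperf.py perform_tests, and then confirms that attaching with the wrong key (key1) fails with an I/O error. A condensed sketch of that flow, using the paths, NQNs and options shown in the log:

#!/usr/bin/env bash
# Sketch: drive bdevperf over its private RPC socket with a file-based TLS PSK,
# following the commands visible in the log above.
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock
RPC="$SPDK/scripts/rpc.py -s $SOCK"

# Start bdevperf in wait-for-config mode on its own socket.
"$SPDK/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r "$SOCK" -z &
sleep 2   # the test proper waits for the RPC socket (waitforlisten)

# Register the PSK files and attach over NVMe/TCP using key0.
$RPC keyring_file_add_key key0 /tmp/tmp.zgSTxJazCG
$RPC keyring_file_add_key key1 /tmp/tmp.eqV5U8Sixp
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk key0

# Run the configured job and collect the latency summary.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests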
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:21.568 15:47:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:21.827 15:47:52 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:21.827 15:47:52 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:21.827 15:47:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:22.085 15:47:52 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:22.085 15:47:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:22.341 15:47:52 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:22.341 15:47:52 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:22.341 15:47:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.598 15:47:53 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:22.598 15:47:53 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.zgSTxJazCG 00:36:22.598 15:47:53 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.zgSTxJazCG 00:36:22.598 15:47:53 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:22.598 15:47:53 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.zgSTxJazCG 00:36:22.598 15:47:53 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:22.598 15:47:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:22.598 15:47:53 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:22.598 15:47:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:22.598 15:47:53 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zgSTxJazCG 00:36:22.598 15:47:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zgSTxJazCG 00:36:22.854 [2024-07-13 15:47:53.427739] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zgSTxJazCG': 0100660 00:36:22.854 [2024-07-13 15:47:53.427781] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:22.854 request: 00:36:22.854 { 00:36:22.854 "name": "key0", 00:36:22.854 "path": "/tmp/tmp.zgSTxJazCG", 00:36:22.854 "method": "keyring_file_add_key", 00:36:22.854 "req_id": 1 00:36:22.854 } 00:36:22.854 Got JSON-RPC error response 00:36:22.854 response: 00:36:22.854 { 00:36:22.854 "code": -1, 00:36:22.854 "message": "Operation not permitted" 00:36:22.854 } 00:36:22.855 15:47:53 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:22.855 15:47:53 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:22.855 15:47:53 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:22.855 15:47:53 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:22.855 15:47:53 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.zgSTxJazCG 00:36:22.855 15:47:53 keyring_file -- 
keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zgSTxJazCG 00:36:22.855 15:47:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zgSTxJazCG 00:36:23.112 15:47:53 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.zgSTxJazCG 00:36:23.112 15:47:53 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:23.112 15:47:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:23.112 15:47:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:23.112 15:47:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:23.112 15:47:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:23.112 15:47:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:23.368 15:47:53 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:23.368 15:47:53 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:23.368 15:47:53 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:23.368 15:47:53 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:23.368 15:47:53 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:23.368 15:47:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:23.368 15:47:53 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:23.368 15:47:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:23.368 15:47:53 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:23.368 15:47:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:23.625 [2024-07-13 15:47:54.173791] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.zgSTxJazCG': No such file or directory 00:36:23.625 [2024-07-13 15:47:54.173828] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:23.626 [2024-07-13 15:47:54.173860] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:23.626 [2024-07-13 15:47:54.173881] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:23.626 [2024-07-13 15:47:54.173895] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:23.626 request: 00:36:23.626 { 00:36:23.626 "name": "nvme0", 00:36:23.626 "trtype": "tcp", 00:36:23.626 "traddr": "127.0.0.1", 00:36:23.626 "adrfam": "ipv4", 00:36:23.626 "trsvcid": "4420", 00:36:23.626 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:23.626 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:23.626 "prchk_reftag": false, 00:36:23.626 
"prchk_guard": false, 00:36:23.626 "hdgst": false, 00:36:23.626 "ddgst": false, 00:36:23.626 "psk": "key0", 00:36:23.626 "method": "bdev_nvme_attach_controller", 00:36:23.626 "req_id": 1 00:36:23.626 } 00:36:23.626 Got JSON-RPC error response 00:36:23.626 response: 00:36:23.626 { 00:36:23.626 "code": -19, 00:36:23.626 "message": "No such device" 00:36:23.626 } 00:36:23.626 15:47:54 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:23.626 15:47:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:23.626 15:47:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:23.626 15:47:54 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:23.626 15:47:54 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:23.626 15:47:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:23.883 15:47:54 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:23.883 15:47:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:23.883 15:47:54 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:23.883 15:47:54 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:23.883 15:47:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:23.883 15:47:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:23.883 15:47:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.SwuDY2xk3E 00:36:23.883 15:47:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:23.883 15:47:54 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:23.883 15:47:54 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:23.883 15:47:54 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:23.883 15:47:54 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:23.883 15:47:54 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:23.883 15:47:54 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:23.883 15:47:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SwuDY2xk3E 00:36:23.883 15:47:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.SwuDY2xk3E 00:36:23.883 15:47:54 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.SwuDY2xk3E 00:36:23.883 15:47:54 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SwuDY2xk3E 00:36:23.883 15:47:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SwuDY2xk3E 00:36:24.140 15:47:54 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:24.140 15:47:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:24.395 nvme0n1 00:36:24.395 15:47:55 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:24.395 15:47:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:24.395 15:47:55 keyring_file 
-- keyring/common.sh@12 -- # jq -r .refcnt 00:36:24.395 15:47:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:24.395 15:47:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:24.395 15:47:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:24.652 15:47:55 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:24.652 15:47:55 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:24.652 15:47:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:24.910 15:47:55 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:24.910 15:47:55 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:24.910 15:47:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:24.910 15:47:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:24.910 15:47:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:25.167 15:47:55 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:25.167 15:47:55 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:25.167 15:47:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:25.167 15:47:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:25.167 15:47:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:25.167 15:47:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:25.167 15:47:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:25.423 15:47:56 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:25.423 15:47:56 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:25.423 15:47:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:25.681 15:47:56 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:25.681 15:47:56 keyring_file -- keyring/file.sh@104 -- # jq length 00:36:25.681 15:47:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:25.938 15:47:56 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:25.938 15:47:56 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SwuDY2xk3E 00:36:25.938 15:47:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SwuDY2xk3E 00:36:26.195 15:47:56 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.eqV5U8Sixp 00:36:26.195 15:47:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.eqV5U8Sixp 00:36:26.453 15:47:57 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key0 00:36:26.453 15:47:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:26.710 nvme0n1 00:36:26.710 15:47:57 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:26.710 15:47:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:26.968 15:47:57 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:26.968 "subsystems": [ 00:36:26.968 { 00:36:26.968 "subsystem": "keyring", 00:36:26.968 "config": [ 00:36:26.968 { 00:36:26.968 "method": "keyring_file_add_key", 00:36:26.968 "params": { 00:36:26.968 "name": "key0", 00:36:26.968 "path": "/tmp/tmp.SwuDY2xk3E" 00:36:26.968 } 00:36:26.968 }, 00:36:26.968 { 00:36:26.968 "method": "keyring_file_add_key", 00:36:26.968 "params": { 00:36:26.968 "name": "key1", 00:36:26.968 "path": "/tmp/tmp.eqV5U8Sixp" 00:36:26.968 } 00:36:26.968 } 00:36:26.968 ] 00:36:26.968 }, 00:36:26.968 { 00:36:26.968 "subsystem": "iobuf", 00:36:26.968 "config": [ 00:36:26.968 { 00:36:26.968 "method": "iobuf_set_options", 00:36:26.968 "params": { 00:36:26.968 "small_pool_count": 8192, 00:36:26.968 "large_pool_count": 1024, 00:36:26.968 "small_bufsize": 8192, 00:36:26.968 "large_bufsize": 135168 00:36:26.968 } 00:36:26.968 } 00:36:26.968 ] 00:36:26.968 }, 00:36:26.968 { 00:36:26.968 "subsystem": "sock", 00:36:26.968 "config": [ 00:36:26.968 { 00:36:26.968 "method": "sock_set_default_impl", 00:36:26.968 "params": { 00:36:26.968 "impl_name": "posix" 00:36:26.968 } 00:36:26.968 }, 00:36:26.968 { 00:36:26.968 "method": "sock_impl_set_options", 00:36:26.968 "params": { 00:36:26.968 "impl_name": "ssl", 00:36:26.968 "recv_buf_size": 4096, 00:36:26.968 "send_buf_size": 4096, 00:36:26.968 "enable_recv_pipe": true, 00:36:26.968 "enable_quickack": false, 00:36:26.968 "enable_placement_id": 0, 00:36:26.968 "enable_zerocopy_send_server": true, 00:36:26.968 "enable_zerocopy_send_client": false, 00:36:26.968 "zerocopy_threshold": 0, 00:36:26.968 "tls_version": 0, 00:36:26.968 "enable_ktls": false 00:36:26.968 } 00:36:26.968 }, 00:36:26.968 { 00:36:26.968 "method": "sock_impl_set_options", 00:36:26.968 "params": { 00:36:26.968 "impl_name": "posix", 00:36:26.968 "recv_buf_size": 2097152, 00:36:26.968 "send_buf_size": 2097152, 00:36:26.968 "enable_recv_pipe": true, 00:36:26.968 "enable_quickack": false, 00:36:26.968 "enable_placement_id": 0, 00:36:26.968 "enable_zerocopy_send_server": true, 00:36:26.968 "enable_zerocopy_send_client": false, 00:36:26.968 "zerocopy_threshold": 0, 00:36:26.968 "tls_version": 0, 00:36:26.968 "enable_ktls": false 00:36:26.968 } 00:36:26.968 } 00:36:26.968 ] 00:36:26.968 }, 00:36:26.968 { 00:36:26.968 "subsystem": "vmd", 00:36:26.968 "config": [] 00:36:26.968 }, 00:36:26.968 { 00:36:26.968 "subsystem": "accel", 00:36:26.969 "config": [ 00:36:26.969 { 00:36:26.969 "method": "accel_set_options", 00:36:26.969 "params": { 00:36:26.969 "small_cache_size": 128, 00:36:26.969 "large_cache_size": 16, 00:36:26.969 "task_count": 2048, 00:36:26.969 "sequence_count": 2048, 00:36:26.969 "buf_count": 2048 00:36:26.969 } 00:36:26.969 } 00:36:26.969 ] 00:36:26.969 }, 00:36:26.969 { 00:36:26.969 "subsystem": "bdev", 00:36:26.969 "config": [ 00:36:26.969 { 00:36:26.969 "method": "bdev_set_options", 00:36:26.969 
"params": { 00:36:26.969 "bdev_io_pool_size": 65535, 00:36:26.969 "bdev_io_cache_size": 256, 00:36:26.969 "bdev_auto_examine": true, 00:36:26.969 "iobuf_small_cache_size": 128, 00:36:26.969 "iobuf_large_cache_size": 16 00:36:26.969 } 00:36:26.969 }, 00:36:26.969 { 00:36:26.969 "method": "bdev_raid_set_options", 00:36:26.969 "params": { 00:36:26.969 "process_window_size_kb": 1024 00:36:26.969 } 00:36:26.969 }, 00:36:26.969 { 00:36:26.969 "method": "bdev_iscsi_set_options", 00:36:26.969 "params": { 00:36:26.969 "timeout_sec": 30 00:36:26.969 } 00:36:26.969 }, 00:36:26.969 { 00:36:26.969 "method": "bdev_nvme_set_options", 00:36:26.969 "params": { 00:36:26.969 "action_on_timeout": "none", 00:36:26.969 "timeout_us": 0, 00:36:26.969 "timeout_admin_us": 0, 00:36:26.969 "keep_alive_timeout_ms": 10000, 00:36:26.969 "arbitration_burst": 0, 00:36:26.969 "low_priority_weight": 0, 00:36:26.969 "medium_priority_weight": 0, 00:36:26.969 "high_priority_weight": 0, 00:36:26.969 "nvme_adminq_poll_period_us": 10000, 00:36:26.969 "nvme_ioq_poll_period_us": 0, 00:36:26.969 "io_queue_requests": 512, 00:36:26.969 "delay_cmd_submit": true, 00:36:26.969 "transport_retry_count": 4, 00:36:26.969 "bdev_retry_count": 3, 00:36:26.969 "transport_ack_timeout": 0, 00:36:26.969 "ctrlr_loss_timeout_sec": 0, 00:36:26.969 "reconnect_delay_sec": 0, 00:36:26.969 "fast_io_fail_timeout_sec": 0, 00:36:26.969 "disable_auto_failback": false, 00:36:26.969 "generate_uuids": false, 00:36:26.969 "transport_tos": 0, 00:36:26.969 "nvme_error_stat": false, 00:36:26.969 "rdma_srq_size": 0, 00:36:26.969 "io_path_stat": false, 00:36:26.969 "allow_accel_sequence": false, 00:36:26.969 "rdma_max_cq_size": 0, 00:36:26.969 "rdma_cm_event_timeout_ms": 0, 00:36:26.969 "dhchap_digests": [ 00:36:26.969 "sha256", 00:36:26.969 "sha384", 00:36:26.969 "sha512" 00:36:26.969 ], 00:36:26.969 "dhchap_dhgroups": [ 00:36:26.969 "null", 00:36:26.969 "ffdhe2048", 00:36:26.969 "ffdhe3072", 00:36:26.969 "ffdhe4096", 00:36:26.969 "ffdhe6144", 00:36:26.969 "ffdhe8192" 00:36:26.969 ] 00:36:26.969 } 00:36:26.969 }, 00:36:26.969 { 00:36:26.969 "method": "bdev_nvme_attach_controller", 00:36:26.969 "params": { 00:36:26.969 "name": "nvme0", 00:36:26.969 "trtype": "TCP", 00:36:26.969 "adrfam": "IPv4", 00:36:26.969 "traddr": "127.0.0.1", 00:36:26.969 "trsvcid": "4420", 00:36:26.969 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:26.969 "prchk_reftag": false, 00:36:26.969 "prchk_guard": false, 00:36:26.969 "ctrlr_loss_timeout_sec": 0, 00:36:26.969 "reconnect_delay_sec": 0, 00:36:26.969 "fast_io_fail_timeout_sec": 0, 00:36:26.969 "psk": "key0", 00:36:26.969 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:26.969 "hdgst": false, 00:36:26.969 "ddgst": false 00:36:26.969 } 00:36:26.969 }, 00:36:26.969 { 00:36:26.969 "method": "bdev_nvme_set_hotplug", 00:36:26.969 "params": { 00:36:26.969 "period_us": 100000, 00:36:26.969 "enable": false 00:36:26.969 } 00:36:26.969 }, 00:36:26.969 { 00:36:26.969 "method": "bdev_wait_for_examine" 00:36:26.969 } 00:36:26.969 ] 00:36:26.969 }, 00:36:26.969 { 00:36:26.969 "subsystem": "nbd", 00:36:26.969 "config": [] 00:36:26.969 } 00:36:26.969 ] 00:36:26.969 }' 00:36:26.969 15:47:57 keyring_file -- keyring/file.sh@114 -- # killprocess 1288745 00:36:26.969 15:47:57 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1288745 ']' 00:36:26.969 15:47:57 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1288745 00:36:26.969 15:47:57 keyring_file -- common/autotest_common.sh@953 -- # uname 00:36:26.969 15:47:57 keyring_file -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:26.969 15:47:57 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1288745 00:36:26.969 15:47:57 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:26.969 15:47:57 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:26.969 15:47:57 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1288745' 00:36:26.969 killing process with pid 1288745 00:36:26.969 15:47:57 keyring_file -- common/autotest_common.sh@967 -- # kill 1288745 00:36:26.969 Received shutdown signal, test time was about 1.000000 seconds 00:36:26.969 00:36:26.969 Latency(us) 00:36:26.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:26.969 =================================================================================================================== 00:36:26.969 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:26.969 15:47:57 keyring_file -- common/autotest_common.sh@972 -- # wait 1288745 00:36:27.228 15:47:57 keyring_file -- keyring/file.sh@117 -- # bperfpid=1290187 00:36:27.228 15:47:57 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1290187 /var/tmp/bperf.sock 00:36:27.228 15:47:57 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1290187 ']' 00:36:27.228 15:47:57 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:27.228 15:47:57 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:27.228 15:47:57 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:27.228 15:47:57 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:27.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
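Note: the killprocess calls traced above step through the same guard-then-kill pattern each time: confirm the pid is set and still alive, check the process name with ps so a sudo wrapper is never signalled directly, then kill and wait so bdevperf can flush its shutdown Latency summary. A rough sketch of that pattern, reconstructed from the traced checks (the real helper in test/common/autotest_common.sh is more defensive):

killprocess() {                              # sketch only, reconstructed from the traced checks
  local pid=$1
  [ -n "$pid" ] || return 1                  # @948: refuse an empty pid
  kill -0 "$pid" || return 1                 # @952: is the process still alive?
  if [ "$(uname)" = Linux ]; then            # @953
    local name
    name=$(ps --no-headers -o comm= "$pid")  # @954: expect reactor_0 / reactor_1 here
    [ "$name" != sudo ] || return 1          # @958: the real helper special-cases a sudo wrapper
  fi
  echo "killing process with pid $pid"       # @966
  kill "$pid"                                # @967
  wait "$pid" || true                        # @972: reap it so the shutdown stats get printed
}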
00:36:27.228 15:47:57 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:27.228 "subsystems": [ 00:36:27.228 { 00:36:27.228 "subsystem": "keyring", 00:36:27.228 "config": [ 00:36:27.228 { 00:36:27.228 "method": "keyring_file_add_key", 00:36:27.228 "params": { 00:36:27.228 "name": "key0", 00:36:27.228 "path": "/tmp/tmp.SwuDY2xk3E" 00:36:27.228 } 00:36:27.228 }, 00:36:27.228 { 00:36:27.228 "method": "keyring_file_add_key", 00:36:27.228 "params": { 00:36:27.228 "name": "key1", 00:36:27.228 "path": "/tmp/tmp.eqV5U8Sixp" 00:36:27.228 } 00:36:27.228 } 00:36:27.228 ] 00:36:27.228 }, 00:36:27.228 { 00:36:27.228 "subsystem": "iobuf", 00:36:27.228 "config": [ 00:36:27.228 { 00:36:27.228 "method": "iobuf_set_options", 00:36:27.228 "params": { 00:36:27.228 "small_pool_count": 8192, 00:36:27.228 "large_pool_count": 1024, 00:36:27.228 "small_bufsize": 8192, 00:36:27.228 "large_bufsize": 135168 00:36:27.228 } 00:36:27.228 } 00:36:27.228 ] 00:36:27.228 }, 00:36:27.228 { 00:36:27.228 "subsystem": "sock", 00:36:27.228 "config": [ 00:36:27.228 { 00:36:27.228 "method": "sock_set_default_impl", 00:36:27.228 "params": { 00:36:27.228 "impl_name": "posix" 00:36:27.228 } 00:36:27.228 }, 00:36:27.228 { 00:36:27.228 "method": "sock_impl_set_options", 00:36:27.228 "params": { 00:36:27.228 "impl_name": "ssl", 00:36:27.228 "recv_buf_size": 4096, 00:36:27.228 "send_buf_size": 4096, 00:36:27.228 "enable_recv_pipe": true, 00:36:27.228 "enable_quickack": false, 00:36:27.228 "enable_placement_id": 0, 00:36:27.228 "enable_zerocopy_send_server": true, 00:36:27.228 "enable_zerocopy_send_client": false, 00:36:27.228 "zerocopy_threshold": 0, 00:36:27.228 "tls_version": 0, 00:36:27.228 "enable_ktls": false 00:36:27.228 } 00:36:27.228 }, 00:36:27.228 { 00:36:27.228 "method": "sock_impl_set_options", 00:36:27.228 "params": { 00:36:27.228 "impl_name": "posix", 00:36:27.228 "recv_buf_size": 2097152, 00:36:27.228 "send_buf_size": 2097152, 00:36:27.228 "enable_recv_pipe": true, 00:36:27.228 "enable_quickack": false, 00:36:27.228 "enable_placement_id": 0, 00:36:27.228 "enable_zerocopy_send_server": true, 00:36:27.228 "enable_zerocopy_send_client": false, 00:36:27.228 "zerocopy_threshold": 0, 00:36:27.228 "tls_version": 0, 00:36:27.228 "enable_ktls": false 00:36:27.228 } 00:36:27.228 } 00:36:27.228 ] 00:36:27.228 }, 00:36:27.228 { 00:36:27.228 "subsystem": "vmd", 00:36:27.228 "config": [] 00:36:27.228 }, 00:36:27.228 { 00:36:27.228 "subsystem": "accel", 00:36:27.228 "config": [ 00:36:27.228 { 00:36:27.228 "method": "accel_set_options", 00:36:27.228 "params": { 00:36:27.228 "small_cache_size": 128, 00:36:27.228 "large_cache_size": 16, 00:36:27.228 "task_count": 2048, 00:36:27.228 "sequence_count": 2048, 00:36:27.228 "buf_count": 2048 00:36:27.228 } 00:36:27.228 } 00:36:27.228 ] 00:36:27.228 }, 00:36:27.228 { 00:36:27.228 "subsystem": "bdev", 00:36:27.228 "config": [ 00:36:27.228 { 00:36:27.228 "method": "bdev_set_options", 00:36:27.228 "params": { 00:36:27.228 "bdev_io_pool_size": 65535, 00:36:27.228 "bdev_io_cache_size": 256, 00:36:27.228 "bdev_auto_examine": true, 00:36:27.228 "iobuf_small_cache_size": 128, 00:36:27.228 "iobuf_large_cache_size": 16 00:36:27.228 } 00:36:27.228 }, 00:36:27.228 { 00:36:27.228 "method": "bdev_raid_set_options", 00:36:27.228 "params": { 00:36:27.228 "process_window_size_kb": 1024 00:36:27.228 } 00:36:27.228 }, 00:36:27.228 { 00:36:27.228 "method": "bdev_iscsi_set_options", 00:36:27.228 "params": { 00:36:27.228 "timeout_sec": 30 00:36:27.228 } 00:36:27.228 }, 00:36:27.228 { 00:36:27.228 "method": 
"bdev_nvme_set_options", 00:36:27.228 "params": { 00:36:27.228 "action_on_timeout": "none", 00:36:27.228 "timeout_us": 0, 00:36:27.228 "timeout_admin_us": 0, 00:36:27.228 "keep_alive_timeout_ms": 10000, 00:36:27.228 "arbitration_burst": 0, 00:36:27.228 "low_priority_weight": 0, 00:36:27.228 "medium_priority_weight": 0, 00:36:27.228 "high_priority_weight": 0, 00:36:27.228 "nvme_adminq_poll_period_us": 10000, 00:36:27.228 "nvme_ioq_poll_period_us": 0, 00:36:27.228 "io_queue_requests": 512, 00:36:27.228 "delay_cmd_submit": true, 00:36:27.228 "transport_retry_count": 4, 00:36:27.228 "bdev_retry_count": 3, 00:36:27.228 "transport_ack_timeout": 0, 00:36:27.228 "ctrlr_loss_timeout_sec": 0, 00:36:27.228 "reconnect_delay_sec": 0, 00:36:27.228 "fast_io_fail_timeout_sec": 0, 00:36:27.228 "disable_auto_failback": false, 00:36:27.228 "generate_uuids": false, 00:36:27.228 "transport_tos": 0, 00:36:27.228 "nvme_error_stat": false, 00:36:27.228 "rdma_srq_size": 0, 00:36:27.228 "io_path_stat": false, 00:36:27.228 "allow_accel_sequence": false, 00:36:27.228 "rdma_max_cq_size": 0, 00:36:27.228 "rdma_cm_event_timeout_ms": 0, 00:36:27.228 "dhchap_digests": [ 00:36:27.228 "sha256", 00:36:27.228 "sha384", 00:36:27.228 "sha512" 00:36:27.228 ], 00:36:27.228 "dhchap_dhgroups": [ 00:36:27.228 "null", 00:36:27.228 "ffdhe2048", 00:36:27.228 "ffdhe3072", 00:36:27.228 "ffdhe4096", 00:36:27.228 "ffdhe6144", 00:36:27.228 "ffdhe8192" 00:36:27.228 ] 00:36:27.228 } 00:36:27.228 }, 00:36:27.228 { 00:36:27.228 "method": "bdev_nvme_attach_controller", 00:36:27.228 "params": { 00:36:27.228 "name": "nvme0", 00:36:27.228 "trtype": "TCP", 00:36:27.228 "adrfam": "IPv4", 00:36:27.228 "traddr": "127.0.0.1", 00:36:27.228 "trsvcid": "4420", 00:36:27.228 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:27.228 "prchk_reftag": false, 00:36:27.228 "prchk_guard": false, 00:36:27.228 "ctrlr_loss_timeout_sec": 0, 00:36:27.228 "reconnect_delay_sec": 0, 00:36:27.228 "fast_io_fail_timeout_sec": 0, 00:36:27.228 "psk": "key0", 00:36:27.228 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:27.228 "hdgst": false, 00:36:27.228 "ddgst": false 00:36:27.229 } 00:36:27.229 }, 00:36:27.229 { 00:36:27.229 "method": "bdev_nvme_set_hotplug", 00:36:27.229 "params": { 00:36:27.229 "period_us": 100000, 00:36:27.229 "enable": false 00:36:27.229 } 00:36:27.229 }, 00:36:27.229 { 00:36:27.229 "method": "bdev_wait_for_examine" 00:36:27.229 } 00:36:27.229 ] 00:36:27.229 }, 00:36:27.229 { 00:36:27.229 "subsystem": "nbd", 00:36:27.229 "config": [] 00:36:27.229 } 00:36:27.229 ] 00:36:27.229 }' 00:36:27.229 15:47:57 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:27.229 15:47:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:27.229 [2024-07-13 15:47:57.928357] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:36:27.229 [2024-07-13 15:47:57.928436] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290187 ] 00:36:27.229 EAL: No free 2048 kB hugepages reported on node 1 00:36:27.229 [2024-07-13 15:47:57.958678] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
00:36:27.229 [2024-07-13 15:47:57.989835] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:27.487 [2024-07-13 15:47:58.079321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:27.744 [2024-07-13 15:47:58.270442] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:28.328 15:47:58 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:28.328 15:47:58 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:28.328 15:47:58 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:36:28.328 15:47:58 keyring_file -- keyring/file.sh@120 -- # jq length 00:36:28.328 15:47:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:28.585 15:47:59 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:36:28.585 15:47:59 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:36:28.585 15:47:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:28.585 15:47:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:28.585 15:47:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:28.585 15:47:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:28.585 15:47:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:28.842 15:47:59 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:28.842 15:47:59 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:36:28.842 15:47:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:28.842 15:47:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:28.842 15:47:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:28.842 15:47:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:28.842 15:47:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:29.100 15:47:59 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:36:29.100 15:47:59 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:36:29.100 15:47:59 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:36:29.100 15:47:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:29.100 15:47:59 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:36:29.100 15:47:59 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:29.100 15:47:59 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.SwuDY2xk3E /tmp/tmp.eqV5U8Sixp 00:36:29.100 15:47:59 keyring_file -- keyring/file.sh@20 -- # killprocess 1290187 00:36:29.100 15:47:59 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1290187 ']' 00:36:29.100 15:47:59 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1290187 00:36:29.100 15:47:59 keyring_file -- common/autotest_common.sh@953 -- # uname 00:36:29.358 15:47:59 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:29.358 15:47:59 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1290187 00:36:29.358 15:47:59 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:29.358 15:47:59 
keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:29.358 15:47:59 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1290187' 00:36:29.358 killing process with pid 1290187 00:36:29.358 15:47:59 keyring_file -- common/autotest_common.sh@967 -- # kill 1290187 00:36:29.358 Received shutdown signal, test time was about 1.000000 seconds 00:36:29.358 00:36:29.358 Latency(us) 00:36:29.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:29.358 =================================================================================================================== 00:36:29.358 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:29.358 15:47:59 keyring_file -- common/autotest_common.sh@972 -- # wait 1290187 00:36:29.358 15:48:00 keyring_file -- keyring/file.sh@21 -- # killprocess 1288733 00:36:29.358 15:48:00 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1288733 ']' 00:36:29.358 15:48:00 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1288733 00:36:29.358 15:48:00 keyring_file -- common/autotest_common.sh@953 -- # uname 00:36:29.358 15:48:00 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:29.358 15:48:00 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1288733 00:36:29.617 15:48:00 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:29.617 15:48:00 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:29.617 15:48:00 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1288733' 00:36:29.617 killing process with pid 1288733 00:36:29.617 15:48:00 keyring_file -- common/autotest_common.sh@967 -- # kill 1288733 00:36:29.617 [2024-07-13 15:48:00.144501] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:29.617 15:48:00 keyring_file -- common/autotest_common.sh@972 -- # wait 1288733 00:36:29.875 00:36:29.875 real 0m14.050s 00:36:29.875 user 0m34.685s 00:36:29.875 sys 0m3.266s 00:36:29.875 15:48:00 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:29.875 15:48:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:29.875 ************************************ 00:36:29.875 END TEST keyring_file 00:36:29.875 ************************************ 00:36:29.875 15:48:00 -- common/autotest_common.sh@1142 -- # return 0 00:36:29.875 15:48:00 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:36:29.875 15:48:00 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:29.875 15:48:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:29.875 15:48:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:29.875 15:48:00 -- common/autotest_common.sh@10 -- # set +x 00:36:29.875 ************************************ 00:36:29.875 START TEST keyring_linux 00:36:29.875 ************************************ 00:36:29.875 15:48:00 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:29.875 * Looking for test storage... 
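Note: the refcount and key checks in the keyring_file run above are all built from two small jq helpers, get_key and get_refcnt in test/keyring/common.sh; a condensed sketch of that pattern (paths shortened):

get_key()    { scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }
get_refcnt() { get_key "$1" | jq -r .refcnt; }
(( $(get_refcnt key0) == 2 ))    # matches the '(( 2 == 2 ))' check traced while the controller was attached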
00:36:29.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:30.160 15:48:00 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:30.160 15:48:00 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:30.160 15:48:00 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:30.160 15:48:00 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:30.160 15:48:00 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:30.160 15:48:00 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.160 15:48:00 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.160 15:48:00 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.160 15:48:00 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:30.160 15:48:00 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:30.160 15:48:00 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:30.160 15:48:00 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:30.160 15:48:00 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:30.160 15:48:00 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:30.160 15:48:00 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:30.160 15:48:00 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:30.160 15:48:00 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:30.161 15:48:00 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:30.161 15:48:00 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:30.161 15:48:00 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:30.161 15:48:00 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:30.161 15:48:00 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:30.161 15:48:00 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:30.161 15:48:00 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:30.161 15:48:00 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:30.161 15:48:00 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:30.161 15:48:00 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:30.161 15:48:00 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:30.161 15:48:00 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:30.161 15:48:00 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:30.161 15:48:00 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:30.161 15:48:00 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:30.161 /tmp/:spdk-test:key0 00:36:30.161 15:48:00 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:30.161 15:48:00 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:30.161 15:48:00 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:30.161 15:48:00 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:30.161 15:48:00 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:30.161 15:48:00 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:30.161 15:48:00 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:30.161 15:48:00 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:30.161 15:48:00 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:30.161 15:48:00 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:30.161 15:48:00 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:30.161 15:48:00 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:30.161 15:48:00 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:30.161 15:48:00 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:30.161 15:48:00 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:30.161 /tmp/:spdk-test:key1 00:36:30.161 15:48:00 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1290610 00:36:30.161 15:48:00 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:30.161 15:48:00 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1290610 00:36:30.161 15:48:00 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1290610 ']' 00:36:30.161 15:48:00 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:30.161 15:48:00 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:30.161 15:48:00 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:30.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:30.161 15:48:00 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:30.161 15:48:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:30.161 [2024-07-13 15:48:00.786252] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:36:30.161 [2024-07-13 15:48:00.786350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290610 ] 00:36:30.161 EAL: No free 2048 kB hugepages reported on node 1 00:36:30.161 [2024-07-13 15:48:00.818556] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
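Note: prep_key above writes each PSK in the NVMe/TCP TLS interchange format through format_interchange_psk (nvmf/common.sh@715); the python snippet the key is piped through is not echoed in the trace. Judging from the resulting 'NVMeTLSkey-1:00:...' strings, it base64-encodes the configured key with a short checksum tail appended; a rough, hypothetical reconstruction:

format_interchange_psk() {                    # sketch; the real code is format_key in nvmf/common.sh
  local key=$1 digest=$2
  python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # assumption: a 4-byte little-endian CRC-32 tail
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PY
}
format_interchange_psk 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0   # path as used above
chmod 0600 /tmp/:spdk-test:key0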
00:36:30.161 [2024-07-13 15:48:00.845129] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:30.419 [2024-07-13 15:48:00.937572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:30.678 15:48:01 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:30.678 15:48:01 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:36:30.678 15:48:01 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:30.678 15:48:01 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.678 15:48:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:30.678 [2024-07-13 15:48:01.191127] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:30.678 null0 00:36:30.678 [2024-07-13 15:48:01.223196] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:30.678 [2024-07-13 15:48:01.223660] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:30.678 15:48:01 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.678 15:48:01 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:30.678 1004912825 00:36:30.678 15:48:01 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:30.678 240845972 00:36:30.678 15:48:01 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1290725 00:36:30.678 15:48:01 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:30.678 15:48:01 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1290725 /var/tmp/bperf.sock 00:36:30.678 15:48:01 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1290725 ']' 00:36:30.678 15:48:01 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:30.678 15:48:01 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:30.678 15:48:01 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:30.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:30.678 15:48:01 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:30.678 15:48:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:30.678 [2024-07-13 15:48:01.288437] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.07.0-rc2 initialization... 00:36:30.678 [2024-07-13 15:48:01.288521] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290725 ] 00:36:30.678 EAL: No free 2048 kB hugepages reported on node 1 00:36:30.678 [2024-07-13 15:48:01.325681] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc2 is used. There is no support for it in SPDK. Enabled only for validation. 
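Note: unlike keyring_file, this run keeps the PSKs in the kernel session keyring: keyctl add returns the serial numbers (1004912825 and 240845972) that the cleanup unlinks later, and SPDK resolves the ':spdk-test:key0' description itself once keyring_linux_set_options --enable has been issued. The flow, condensed from the trace (paths shortened):

sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
keyctl search @s user :spdk-test:key0     # resolves the description back to $sn (linux.sh@16)
keyctl unlink "$sn"                       # cleanup: '1 links removed'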
00:36:30.678 [2024-07-13 15:48:01.358125] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:30.936 [2024-07-13 15:48:01.452689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:30.936 15:48:01 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:30.936 15:48:01 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:36:30.936 15:48:01 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:30.936 15:48:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:31.193 15:48:01 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:31.193 15:48:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:31.450 15:48:02 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:31.450 15:48:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:31.707 [2024-07-13 15:48:02.370999] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:31.707 nvme0n1 00:36:31.707 15:48:02 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:31.707 15:48:02 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:31.707 15:48:02 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:31.707 15:48:02 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:31.707 15:48:02 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:31.707 15:48:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:31.964 15:48:02 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:31.964 15:48:02 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:31.964 15:48:02 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:31.964 15:48:02 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:31.964 15:48:02 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:31.964 15:48:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:31.964 15:48:02 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:32.226 15:48:02 keyring_linux -- keyring/linux.sh@25 -- # sn=1004912825 00:36:32.226 15:48:02 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:32.226 15:48:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:32.226 15:48:02 keyring_linux -- keyring/linux.sh@26 -- # [[ 1004912825 == \1\0\0\4\9\1\2\8\2\5 ]] 00:36:32.226 15:48:02 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1004912825 00:36:32.226 15:48:02 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:32.226 15:48:02 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:32.488 Running I/O for 1 seconds... 00:36:33.420 00:36:33.420 Latency(us) 00:36:33.420 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:33.420 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:33.420 nvme0n1 : 1.02 4374.25 17.09 0.00 0.00 28982.32 11650.84 45826.65 00:36:33.420 =================================================================================================================== 00:36:33.420 Total : 4374.25 17.09 0.00 0.00 28982.32 11650.84 45826.65 00:36:33.420 0 00:36:33.420 15:48:04 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:33.420 15:48:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:33.677 15:48:04 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:33.678 15:48:04 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:33.678 15:48:04 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:33.678 15:48:04 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:33.678 15:48:04 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:33.678 15:48:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:33.935 15:48:04 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:33.935 15:48:04 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:33.935 15:48:04 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:33.935 15:48:04 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:33.935 15:48:04 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:36:33.935 15:48:04 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:33.935 15:48:04 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:33.935 15:48:04 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:33.935 15:48:04 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:33.935 15:48:04 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:33.935 15:48:04 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:33.935 15:48:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:34.193 [2024-07-13 15:48:04.866380] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:34.193 [2024-07-13 15:48:04.866681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e9690 (107): Transport endpoint is not connected 00:36:34.193 [2024-07-13 15:48:04.867674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e9690 (9): Bad file descriptor 00:36:34.193 [2024-07-13 15:48:04.868671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:34.193 [2024-07-13 15:48:04.868695] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:34.193 [2024-07-13 15:48:04.868712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:34.193 request: 00:36:34.193 { 00:36:34.193 "name": "nvme0", 00:36:34.193 "trtype": "tcp", 00:36:34.193 "traddr": "127.0.0.1", 00:36:34.193 "adrfam": "ipv4", 00:36:34.193 "trsvcid": "4420", 00:36:34.193 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:34.193 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:34.193 "prchk_reftag": false, 00:36:34.193 "prchk_guard": false, 00:36:34.193 "hdgst": false, 00:36:34.193 "ddgst": false, 00:36:34.193 "psk": ":spdk-test:key1", 00:36:34.193 "method": "bdev_nvme_attach_controller", 00:36:34.193 "req_id": 1 00:36:34.193 } 00:36:34.193 Got JSON-RPC error response 00:36:34.193 response: 00:36:34.193 { 00:36:34.193 "code": -5, 00:36:34.193 "message": "Input/output error" 00:36:34.193 } 00:36:34.193 15:48:04 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:36:34.193 15:48:04 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:34.193 15:48:04 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:34.193 15:48:04 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:34.193 15:48:04 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:34.193 15:48:04 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:34.193 15:48:04 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:34.193 15:48:04 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:34.193 15:48:04 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:34.193 15:48:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:34.193 15:48:04 keyring_linux -- keyring/linux.sh@33 -- # sn=1004912825 00:36:34.193 15:48:04 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1004912825 00:36:34.193 1 links removed 00:36:34.193 15:48:04 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:34.193 15:48:04 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:34.193 15:48:04 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:34.193 15:48:04 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:34.193 15:48:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:34.193 15:48:04 keyring_linux -- keyring/linux.sh@33 -- # sn=240845972 00:36:34.193 15:48:04 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 240845972 00:36:34.193 1 links removed 00:36:34.193 15:48:04 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1290725 00:36:34.193 15:48:04 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1290725 ']' 00:36:34.193 15:48:04 
keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1290725 00:36:34.193 15:48:04 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:36:34.193 15:48:04 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:34.193 15:48:04 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1290725 00:36:34.193 15:48:04 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:34.193 15:48:04 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:34.193 15:48:04 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1290725' 00:36:34.193 killing process with pid 1290725 00:36:34.193 15:48:04 keyring_linux -- common/autotest_common.sh@967 -- # kill 1290725 00:36:34.193 Received shutdown signal, test time was about 1.000000 seconds 00:36:34.193 00:36:34.193 Latency(us) 00:36:34.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:34.193 =================================================================================================================== 00:36:34.193 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:34.193 15:48:04 keyring_linux -- common/autotest_common.sh@972 -- # wait 1290725 00:36:34.451 15:48:05 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1290610 00:36:34.451 15:48:05 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1290610 ']' 00:36:34.451 15:48:05 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1290610 00:36:34.451 15:48:05 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:36:34.451 15:48:05 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:34.451 15:48:05 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1290610 00:36:34.451 15:48:05 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:34.451 15:48:05 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:34.451 15:48:05 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1290610' 00:36:34.451 killing process with pid 1290610 00:36:34.451 15:48:05 keyring_linux -- common/autotest_common.sh@967 -- # kill 1290610 00:36:34.451 15:48:05 keyring_linux -- common/autotest_common.sh@972 -- # wait 1290610 00:36:35.016 00:36:35.016 real 0m5.023s 00:36:35.016 user 0m9.443s 00:36:35.016 sys 0m1.551s 00:36:35.016 15:48:05 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:35.016 15:48:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:35.016 ************************************ 00:36:35.016 END TEST keyring_linux 00:36:35.016 ************************************ 00:36:35.016 15:48:05 -- common/autotest_common.sh@1142 -- # return 0 00:36:35.016 15:48:05 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:36:35.016 15:48:05 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:36:35.016 15:48:05 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:36:35.016 15:48:05 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:36:35.016 15:48:05 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:36:35.016 15:48:05 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:36:35.016 15:48:05 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:36:35.016 15:48:05 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:36:35.016 15:48:05 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:36:35.016 15:48:05 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:36:35.016 15:48:05 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 
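Note: the deliberately failing attach with ':spdk-test:key1' a few lines up (JSON-RPC error -5, Input/output error) runs through the NOT wrapper from autotest_common.sh, which inverts the exit status so an expected failure keeps the test green. A simplified sketch of that wrapper (the traced helper also screens out exit codes above 128):

NOT() {
  local es=0
  "$@" || es=$?
  (( es != 0 ))                              # succeed only if the wrapped command failed
}
NOT scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1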
00:36:35.016 15:48:05 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:36:35.016 15:48:05 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:36:35.016 15:48:05 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:36:35.016 15:48:05 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:36:35.016 15:48:05 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:36:35.016 15:48:05 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:36:35.016 15:48:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:35.016 15:48:05 -- common/autotest_common.sh@10 -- # set +x 00:36:35.016 15:48:05 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:36:35.016 15:48:05 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:36:35.016 15:48:05 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:36:35.016 15:48:05 -- common/autotest_common.sh@10 -- # set +x 00:36:36.913 INFO: APP EXITING 00:36:36.913 INFO: killing all VMs 00:36:36.913 INFO: killing vhost app 00:36:36.913 INFO: EXIT DONE 00:36:37.846 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:36:37.846 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:36:37.846 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:36:37.846 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:36:37.846 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:36:37.846 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:36:37.846 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:36:37.846 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:36:37.846 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:36:37.846 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:36:37.846 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:36:37.846 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:36:37.846 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:36:37.846 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:36:37.846 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:36:37.846 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:36:37.846 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:36:39.222 Cleaning 00:36:39.222 Removing: /var/run/dpdk/spdk0/config 00:36:39.222 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:39.222 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:39.222 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:39.222 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:39.222 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:39.222 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:39.222 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:39.222 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:39.222 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:39.222 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:39.222 Removing: /var/run/dpdk/spdk1/config 00:36:39.222 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:39.222 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:39.222 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:39.222 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:39.222 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:39.222 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:39.222 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:39.222 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:39.222 Removing: 
/var/run/dpdk/spdk1/fbarray_memzone 00:36:39.222 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:39.222 Removing: /var/run/dpdk/spdk1/mp_socket 00:36:39.222 Removing: /var/run/dpdk/spdk2/config 00:36:39.222 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:39.222 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:39.222 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:39.222 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:39.222 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:39.222 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:39.222 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:39.222 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:39.222 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:39.222 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:39.222 Removing: /var/run/dpdk/spdk3/config 00:36:39.222 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:39.222 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:39.222 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:39.222 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:39.222 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:39.222 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:39.222 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:39.222 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:39.222 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:39.222 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:39.222 Removing: /var/run/dpdk/spdk4/config 00:36:39.222 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:39.222 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:39.222 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:39.222 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:39.222 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:39.222 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:39.222 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:39.222 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:39.222 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:39.222 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:39.222 Removing: /dev/shm/bdev_svc_trace.1 00:36:39.222 Removing: /dev/shm/nvmf_trace.0 00:36:39.222 Removing: /dev/shm/spdk_tgt_trace.pid970609 00:36:39.222 Removing: /var/run/dpdk/spdk0 00:36:39.222 Removing: /var/run/dpdk/spdk1 00:36:39.222 Removing: /var/run/dpdk/spdk2 00:36:39.222 Removing: /var/run/dpdk/spdk3 00:36:39.222 Removing: /var/run/dpdk/spdk4 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1042074 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1044677 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1052016 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1055303 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1057643 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1058049 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1062003 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1065844 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1065847 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1066502 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1067040 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1067699 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1068112 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1068227 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1068370 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1068497 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1068505 
00:36:39.222 Removing: /var/run/dpdk/spdk_pid1069162 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1069699 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1070354 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1070755 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1070757 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1071021 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1071900 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1072620 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1078462 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1078737 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1081236 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1084866 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1086972 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1093231 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1098422 00:36:39.222 Removing: /var/run/dpdk/spdk_pid1099613 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1100276 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1110551 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1113008 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1138494 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1141289 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1142471 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1143778 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1143919 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1143935 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1144069 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1144499 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1145796 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1146415 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1146842 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1148453 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1148761 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1149319 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1151719 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1154974 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1158495 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1181839 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1184598 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1188239 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1189181 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1190270 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1192811 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1195152 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1199859 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1199861 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1202622 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1202763 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1202949 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1203276 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1203287 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1204362 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1205539 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1206715 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1207895 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1209076 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1210328 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1214055 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1214505 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1215782 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1216516 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1220225 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1222074 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1225596 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1229552 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1235756 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1240107 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1240109 
00:36:39.482 Removing: /var/run/dpdk/spdk_pid1252303 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1252713 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1253113 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1253642 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1254174 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1254628 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1255036 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1255445 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1257933 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1258077 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1262483 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1262653 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1264258 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1269192 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1269291 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1272077 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1273480 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1274912 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1275730 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1277141 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1278013 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1283350 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1283668 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1284062 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1285622 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1286018 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1286292 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1288733 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1288745 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1290187 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1290610 00:36:39.482 Removing: /var/run/dpdk/spdk_pid1290725 00:36:39.482 Removing: /var/run/dpdk/spdk_pid968987 00:36:39.482 Removing: /var/run/dpdk/spdk_pid969717 00:36:39.482 Removing: /var/run/dpdk/spdk_pid970609 00:36:39.482 Removing: /var/run/dpdk/spdk_pid970969 00:36:39.482 Removing: /var/run/dpdk/spdk_pid971660 00:36:39.482 Removing: /var/run/dpdk/spdk_pid971802 00:36:39.482 Removing: /var/run/dpdk/spdk_pid972517 00:36:39.482 Removing: /var/run/dpdk/spdk_pid972533 00:36:39.482 Removing: /var/run/dpdk/spdk_pid972775 00:36:39.482 Removing: /var/run/dpdk/spdk_pid974035 00:36:39.482 Removing: /var/run/dpdk/spdk_pid974872 00:36:39.482 Removing: /var/run/dpdk/spdk_pid975177 00:36:39.482 Removing: /var/run/dpdk/spdk_pid975367 00:36:39.482 Removing: /var/run/dpdk/spdk_pid975569 00:36:39.482 Removing: /var/run/dpdk/spdk_pid975765 00:36:39.482 Removing: /var/run/dpdk/spdk_pid975922 00:36:39.482 Removing: /var/run/dpdk/spdk_pid976078 00:36:39.482 Removing: /var/run/dpdk/spdk_pid976257 00:36:39.482 Removing: /var/run/dpdk/spdk_pid976568 00:36:39.482 Removing: /var/run/dpdk/spdk_pid978923 00:36:39.482 Removing: /var/run/dpdk/spdk_pid979085 00:36:39.482 Removing: /var/run/dpdk/spdk_pid979253 00:36:39.741 Removing: /var/run/dpdk/spdk_pid979377 00:36:39.741 Removing: /var/run/dpdk/spdk_pid979689 00:36:39.741 Removing: /var/run/dpdk/spdk_pid979692 00:36:39.741 Removing: /var/run/dpdk/spdk_pid980123 00:36:39.741 Removing: /var/run/dpdk/spdk_pid980126 00:36:39.741 Removing: /var/run/dpdk/spdk_pid980418 00:36:39.741 Removing: /var/run/dpdk/spdk_pid980426 00:36:39.741 Removing: /var/run/dpdk/spdk_pid980594 00:36:39.741 Removing: /var/run/dpdk/spdk_pid980718 00:36:39.741 Removing: /var/run/dpdk/spdk_pid981093 00:36:39.741 Removing: /var/run/dpdk/spdk_pid981247 00:36:39.741 Removing: /var/run/dpdk/spdk_pid981438 00:36:39.741 Removing: /var/run/dpdk/spdk_pid981609 00:36:39.741 Removing: 
/var/run/dpdk/spdk_pid981753 00:36:39.741 Removing: /var/run/dpdk/spdk_pid981822 00:36:39.741 Removing: /var/run/dpdk/spdk_pid981979 00:36:39.741 Removing: /var/run/dpdk/spdk_pid982247 00:36:39.741 Removing: /var/run/dpdk/spdk_pid982411 00:36:39.741 Removing: /var/run/dpdk/spdk_pid982567 00:36:39.741 Removing: /var/run/dpdk/spdk_pid982733 00:36:39.741 Removing: /var/run/dpdk/spdk_pid982996 00:36:39.741 Removing: /var/run/dpdk/spdk_pid983152 00:36:39.741 Removing: /var/run/dpdk/spdk_pid983312 00:36:39.741 Removing: /var/run/dpdk/spdk_pid983581 00:36:39.741 Removing: /var/run/dpdk/spdk_pid983764 00:36:39.741 Removing: /var/run/dpdk/spdk_pid984001 00:36:39.741 Removing: /var/run/dpdk/spdk_pid984174 00:36:39.741 Removing: /var/run/dpdk/spdk_pid984440 00:36:39.741 Removing: /var/run/dpdk/spdk_pid984597 00:36:39.741 Removing: /var/run/dpdk/spdk_pid984915 00:36:39.741 Removing: /var/run/dpdk/spdk_pid985512 00:36:39.741 Removing: /var/run/dpdk/spdk_pid985694 00:36:39.741 Removing: /var/run/dpdk/spdk_pid985855 00:36:39.741 Removing: /var/run/dpdk/spdk_pid986017 00:36:39.741 Removing: /var/run/dpdk/spdk_pid986289 00:36:39.741 Removing: /var/run/dpdk/spdk_pid986365 00:36:39.741 Removing: /var/run/dpdk/spdk_pid986569 00:36:39.741 Removing: /var/run/dpdk/spdk_pid988744 00:36:39.741 Clean 00:36:39.741 15:48:10 -- common/autotest_common.sh@1451 -- # return 0 00:36:39.741 15:48:10 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:36:39.741 15:48:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:39.741 15:48:10 -- common/autotest_common.sh@10 -- # set +x 00:36:39.741 15:48:10 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:36:39.741 15:48:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:39.741 15:48:10 -- common/autotest_common.sh@10 -- # set +x 00:36:39.741 15:48:10 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:39.741 15:48:10 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:39.741 15:48:10 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:39.741 15:48:10 -- spdk/autotest.sh@391 -- # hash lcov 00:36:39.741 15:48:10 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:39.741 15:48:10 -- spdk/autotest.sh@393 -- # hostname 00:36:39.741 15:48:10 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:39.999 geninfo: WARNING: invalid characters removed from testname! 
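The lcov invocations above and below post-process the coverage for this run: a pre-test baseline capture (cov_base.info) and the post-test capture (cov_test.info) are merged into cov_total.info, and code that is not SPDK's own (DPDK, /usr, the vmd example, spdk_lspci, spdk_top) is filtered out. A condensed sketch of that flow is shown here, using the same flags and tracefile names the log shows but shortened, stand-in paths; the genhtml step at the end is an assumption about how such a tracefile would typically be rendered and is not something this run performs:

#!/usr/bin/env bash
# Sketch of the coverage post-processing seen in this log (not the autotest.sh code itself).
set -euo pipefail

out=/tmp/output   # stands in for .../spdk/../output
src=/tmp/spdk     # stands in for the SPDK checkout

# capture coverage gathered while the tests ran
lcov --rc lcov_branch_coverage=1 --no-external -q -c -d "$src" \
     -t spdk-gp-11 -o "$out/cov_test.info"

# merge with the pre-test baseline, then drop code that is not SPDK's own
lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
done

# assumed extra step: render an HTML report from the filtered tracefile
genhtml -q "$out/cov_total.info" -o "$out/coverage"

Filtering with -r after the merge, rather than restricting the capture itself, keeps the raw base/test tracefiles intact so they can be re-filtered later with different exclusions.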
00:37:12.089 15:48:38 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:12.089 15:48:42 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:16.266 15:48:46 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:19.542 15:48:50 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:23.720 15:48:53 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:26.246 15:48:56 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:28.775 15:48:59 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:29.034 15:48:59 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:29.035 15:48:59 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:29.035 15:48:59 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:29.035 15:48:59 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:29.035 15:48:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.035 15:48:59 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.035 15:48:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.035 15:48:59 -- paths/export.sh@5 -- $ export PATH 00:37:29.035 15:48:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.035 15:48:59 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:37:29.035 15:48:59 -- common/autobuild_common.sh@444 -- $ date +%s 00:37:29.035 15:48:59 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720878539.XXXXXX 00:37:29.035 15:48:59 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720878539.JeUABB 00:37:29.035 15:48:59 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:37:29.035 15:48:59 -- common/autobuild_common.sh@450 -- $ '[' -n main ']' 00:37:29.035 15:48:59 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:37:29.035 15:48:59 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:37:29.035 15:48:59 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:37:29.035 15:48:59 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:37:29.035 15:48:59 -- common/autobuild_common.sh@460 -- $ get_config_params 00:37:29.035 15:48:59 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:37:29.035 15:48:59 -- common/autotest_common.sh@10 -- $ set +x 00:37:29.035 15:48:59 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:37:29.035 15:48:59 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:37:29.035 15:48:59 -- pm/common@17 -- $ local monitor 00:37:29.035 15:48:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:29.035 15:48:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:29.035 15:48:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:29.035 
15:48:59 -- pm/common@21 -- $ date +%s
00:37:29.035 15:48:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:29.035 15:48:59 -- pm/common@21 -- $ date +%s
00:37:29.035 15:48:59 -- pm/common@25 -- $ sleep 1
00:37:29.035 15:48:59 -- pm/common@21 -- $ date +%s
00:37:29.035 15:48:59 -- pm/common@21 -- $ date +%s
00:37:29.035 15:48:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720878539
00:37:29.035 15:48:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720878539
00:37:29.035 15:48:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720878539
00:37:29.035 15:48:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720878539
00:37:29.035 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720878539_collect-vmstat.pm.log
00:37:29.035 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720878539_collect-cpu-load.pm.log
00:37:29.035 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720878539_collect-cpu-temp.pm.log
00:37:29.035 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720878539_collect-bmc-pm.bmc.pm.log
00:37:29.972 15:49:00 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:37:29.972 15:49:00 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:37:29.972 15:49:00 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:29.972 15:49:00 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:37:29.972 15:49:00 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:37:29.972 15:49:00 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:37:29.972 15:49:00 -- spdk/autopackage.sh@19 -- $ timing_finish
00:37:29.972 15:49:00 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:29.972 15:49:00 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:37:29.972 15:49:00 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:37:29.972 15:49:00 -- spdk/autopackage.sh@20 -- $ exit 0
00:37:29.972 15:49:00 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:37:29.972 15:49:00 -- pm/common@29 -- $ signal_monitor_resources TERM
00:37:29.972 15:49:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:37:29.972 15:49:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:29.972 15:49:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:37:29.972 15:49:00 -- pm/common@44 -- $ pid=1302617
00:37:29.972 15:49:00 -- pm/common@50 -- $ kill -TERM 1302617
00:37:29.972 15:49:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:29.972 15:49:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:37:29.972 15:49:00 -- pm/common@44 -- $ pid=1302619
00:37:29.972 15:49:00 -- pm/common@50 -- $ kill -TERM 1302619
00:37:29.972 15:49:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:29.972 15:49:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:37:29.972 15:49:00 -- pm/common@44 -- $ pid=1302621
00:37:29.972 15:49:00 -- pm/common@50 -- $ kill -TERM 1302621
00:37:29.972 15:49:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:29.972 15:49:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:37:29.972 15:49:00 -- pm/common@44 -- $ pid=1302650
00:37:29.972 15:49:00 -- pm/common@50 -- $ sudo -E kill -TERM 1302650
00:37:29.972 + [[ -n 868946 ]]
00:37:29.983 + sudo kill 868946
00:37:29.983 [Pipeline] }
00:37:30.003 [Pipeline] // stage
00:37:30.009 [Pipeline] }
00:37:30.028 [Pipeline] // timeout
00:37:30.033 [Pipeline] }
00:37:30.051 [Pipeline] // catchError
00:37:30.057 [Pipeline] }
00:37:30.075 [Pipeline] // wrap
00:37:30.081 [Pipeline] }
00:37:30.097 [Pipeline] // catchError
00:37:30.107 [Pipeline] stage
00:37:30.109 [Pipeline] { (Epilogue)
00:37:30.125 [Pipeline] catchError
00:37:30.127 [Pipeline] {
00:37:30.142 [Pipeline] echo
00:37:30.144 Cleanup processes
00:37:30.151 [Pipeline] sh
00:37:30.437 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:30.437 1302781 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:37:30.437 1302881 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:30.452 [Pipeline] sh
00:37:30.765 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:30.765 ++ grep -v 'sudo pgrep'
00:37:30.765 ++ awk '{print $1}'
00:37:30.765 + sudo kill -9 1302781
00:37:30.778 [Pipeline] sh
00:37:31.064 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:37:43.275 [Pipeline] sh
00:37:43.562 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:37:43.562 Artifacts sizes are good
00:37:43.581 [Pipeline] archiveArtifacts
00:37:43.588 Archiving artifacts
00:37:43.832 [Pipeline] sh
00:37:44.117 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:37:44.132 [Pipeline] cleanWs
00:37:44.142 [WS-CLEANUP] Deleting project workspace...
00:37:44.142 [WS-CLEANUP] Deferred wipeout is used...
00:37:44.151 [WS-CLEANUP] done
00:37:44.152 [Pipeline] }
00:37:44.172 [Pipeline] // catchError
00:37:44.184 [Pipeline] sh
00:37:44.465 + logger -p user.info -t JENKINS-CI
00:37:44.474 [Pipeline] }
00:37:44.490 [Pipeline] // stage
00:37:44.495 [Pipeline] }
00:37:44.513 [Pipeline] // node
00:37:44.518 [Pipeline] End of Pipeline
00:37:44.553 Finished: SUCCESS